\section{Hyperparameter Space Search} In this appendix we present the hyperparameter space explored for the C-MFN model. \begin{itemize} \item \textbf{Uni-modal Context Network:} \begin{enumerate} \item This module has three LSTMs. The hidden size of each was chosen randomly from: \begin{itemize} \item For LSTM$_l$: $[32,64,88,128,156,256]$ \item For LSTM$_a$: $[8,16,32,48,64,80]$ \item For LSTM$_v$: $[8,16,32,48,64,80]$ \end{itemize} \end{enumerate} \item \textbf{Multimodal Context Network}: \begin{enumerate} \item We use the optimal configurations described in \cite{vaswani2017attention} and implemented in \cite{jadore801120}. The main configurations are: \begin{itemize} \item d\_model (output dimension of Encoder): 512, \item d\_k (dimension of key): 64, \item d\_v (dimension of value): 64, \item n\_head (number of heads used in multi-headed attention): 8, \item n\_layers (number of layers used in Encoder): 6, \item n\_warmup\_steps: 4000, \item dropout: 0.1 \end{itemize} \item To regularize the output of $D(\hat{H})$, we randomly choose a dropout rate from $[0.0,0.2,0.5,0.1]$. \item To regularize the output of $D_m(H)$, we use a dropout probability chosen randomly from: \begin{itemize} \item For $m=l$: $[0.0,0.1,0.2,0.5]$ \item For $m=a$: $[0.0,0.2,0.5,0.1]$ \item For $m=v$: $[0.0,0.2,0.5,0.1]$ \end{itemize} \end{enumerate} \item \textbf{Memory Fusion Network (MFN)}: \begin{enumerate} \item \textbf{System of LSTMs:} The hidden size of LSTM$_m$, $m \in \{l,a,v\}$, was chosen randomly from: \begin{itemize} \item For LSTM$_l$: $[32,64,88,128,156,256]$ \item For LSTM$_a$: $[8,16,32,48,64,80]$ \item For LSTM$_v$: $[8,16,32,48,64,80]$ \end{itemize} \item \textbf{Delta Memory Attention:} This module has two affine transformations, which we call NN1 and NN2. \begin{itemize} \item The projection size of NN1 is chosen randomly from $[32,64,128,256]$, and its output goes through a dropout layer whose rate is chosen randomly from $[0.0,0.2,0.5,0.7]$. \item Similarly, the projection size of NN2 is chosen randomly from $[32,64,128,256]$, followed by a dropout layer whose rate is chosen randomly from $[0.0,0.2,0.5,0.7]$. \end{itemize} \item The \textbf{Multi-view Gated Memory} also has two affine transformations, denoted here as Gamma1 and Gamma2. \begin{itemize} \item Gamma1 first applies a projection whose size is chosen randomly from $[32,64,128,256]$, followed by a dropout whose rate is chosen randomly from $[0.0,0.2,0.5,0.7]$. \item Gamma2 likewise applies a projection and then a dropout. The projection size is chosen randomly from $[32,64,128,256]$ and the dropout rate is chosen randomly from $[0.0,0.2,0.5,0.7]$. \item The memory size of this module is chosen randomly from the set $[64,128,256,300,400]$. \end{itemize} \end{enumerate} \item \textbf{Optimizer:} After some trial and error, we found that the model works best with an Adam optimizer \cite{kingma2014adam} initialized with $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 10^{-9}$. The learning rate was varied according to $\mathrm{learning\_rate} = d_{\mathrm{model}}^{-0.5} \cdot \min(\mathrm{step\_num}^{-0.5},\ \mathrm{step\_num} \cdot \mathrm{warmup\_steps}^{-1.5})$. The optimizer and the scheduler are identical to the ones chosen in \cite{vaswani2017attention}. \end{itemize} \section{Experiments}\label{sec:experiments} In the experiments of this paper, our goal is to establish a performance baseline for the UR-FUNNY dataset. Furthermore, we aim to understand the role of context and punchline, as well as the role of individual modalities, in the task of humor detection.
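As a concrete reference for the training configuration used in the experiments below, the learning-rate schedule from the hyperparameter appendix can be written as a short function. The following is a minimal sketch in plain Python (not the authors' released code); step\_num is assumed to be counted from $1$:
\begin{verbatim}
# Minimal sketch of the Noam learning-rate schedule described in the
# hyperparameter appendix; d_model = 512 and n_warmup_steps = 4000
# follow the configuration listed there.
def learning_rate(step_num, d_model=512, n_warmup_steps=4000):
    return d_model ** -0.5 * min(step_num ** -0.5,
                                 step_num * n_warmup_steps ** -1.5)
\end{verbatim}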
For all the experiments, we use the proposed contextual extension of the Memory Fusion Network (MFN), called C-MFN (Section \ref{subsec:cmfn}). Aside from the proposed \textbf{C-MFN} model, the following variants are also studied: \newline \noindent \textbf{C-MFN (P):} This variant of the C-MFN uses only the punchline with no contextual information. Essentially, this is equivalent to an MFN model since the initialization trick is not used. \newline \noindent \textbf{C-MFN (C):} This variant of the C-MFN uses only contextual information without the punchline. Essentially, this is equivalent to removing the MFN and directly conditioning the humor prediction on the Unimodal and Multimodal Context Network outputs (a Sigmoid-activated neuron after applying $D_m; m \in M$ on $H$ and $D$ on $\hat{H}$). \newline The above variants of the C-MFN allow for studying the importance of punchline and context in modeling humor. Furthermore, we compare the performance of the C-MFN variants in the following scenarios: \textbf{(T)} only the text modality is used, without vision and acoustic; \textbf{(T+V)} text and vision modalities are used without acoustic; \textbf{(T+A)} text and acoustic modalities are used without vision; \textbf{(A+V)} only vision and acoustic modalities are used; \textbf{(T+A+V)} all modalities are used together. We compare the performance of C-MFN variants across the above scenarios. This allows for understanding the role of context and punchline in humor detection, as well as the importance of different modalities. All the models for our experiments are trained using categorical cross-entropy. This measure is calculated between the output of the model and the ground-truth labels. \section{Introduction} Humor is a unique communication skill that removes barriers in conversations. Research shows that effective use of humor allows a speaker to establish rapport~\cite{stauffer1999let}, grab attention~\cite{wanzer2010explanation}, introduce a difficult concept without confusing the audience \cite{garner2005humor} and even build trust~\cite{vartabedian1993humor}. Humor involves multimodal communicative channels including effective use of words (text), accompanying gestures (vision) and sounds (acoustic). Being able to mix and align those modalities appropriately is often unique to individuals, giving rise to many different styles. Styles include gradually building up to a punchline using text, audio, video or any combination of them, a sudden twist to the story with an unexpected punchline~\cite{ramachandran1998neurology}, creating a discrepancy between modalities (e.g., something funny being said without any emotion, also known as dry humor), or just laughing with the speech to stimulate the audience to mirror the laughter~\cite{provine1992contagious}. Modeling humor using a computational framework is inherently challenging due to factors such as: 1) \textit{Idiosyncrasy}: often humorous people are also the most creative ones~\cite{hauck1972relationship}. This creativity in turn adds to the dynamic complexity of how humor is expressed in a multimodal manner. Use of words, gestures, prosodic cues and their (mis)alignments are toolkits that a creative user often experiments with. 2) \textit{Contextual Dependencies}: humor often develops through time as speakers plan for a punchline in advance. There is a gradual build up in the story with a sudden twist using a punchline \cite{ramachandran1998neurology}. Some punchlines, when viewed in isolation (as illustrated in Figure \ref{fig:teaser}), may not appear funny.
The humor stems from the prior build up, cross-referencing multiple sources, and its delivery. Therefore, a full understanding of humor requires analyzing the context of the punchline. Understanding the unique dependencies across modalities and their impact on humor requires knowledge from multimodal language; a recent research trend in the field of natural language processing \cite{zadeh2018proceedings}. Studies in this area aim to explain natural language from the three modalities of text, vision and acoustic. In this paper, alongside computational descriptors for text, gestures such as smiles and vocal properties such as loudness are measured and put together in a multimodal framework to define humor recognition as a multimodal task. The main contribution of this paper to the NLP community is introducing the first multimodal language (including text, vision and acoustic modalities) dataset for humor detection, named ``UR-FUNNY''. This dataset opens the door to understanding and modeling humor in a multimodal framework. The studies in this paper present performance baselines for this task and demonstrate the impact of using all three modalities together for humor modeling. \section{Background} The dataset and experiments in this paper are connected to the following areas: \noindent \textbf{Humor Analysis:} Humor analysis has been among the active areas of research in both natural language processing and affective computing. Notable datasets in this area include ``16000 One-Liners'' \cite{mihalcea2005making}, ``Pun of the Day'' \cite{yang2015humor}, ``PTT Jokes''~\cite{chen2018humor}, ``Ted Laughter''~\cite{chen2017predicting}, and ``Big Bang Theory''~\cite{bertero2016deep}. The above datasets have studied humor from different perspectives. For example, ``16000 One-Liners'' and ``Pun of the Day'' focus on joke detection (joke vs. not joke binary task), while ``Ted Laughter'' focuses on punchline detection (whether or not a punchline triggers laughter). Similar to ``Ted Laughter'', UR-FUNNY focuses on punchline detection. Furthermore, the punchline is accompanied by context sentences to properly model the build up of humor. Unlike previous datasets where negative samples were drawn from a different domain, UR-FUNNY uses a challenging negative sampling scheme where samples are drawn from the same videos. Furthermore, UR-FUNNY is the only humor detection dataset which incorporates all three modalities of text, vision and audio. Table \ref{table:comparison} shows a comparison between previously proposed datasets and the UR-FUNNY dataset. From a modeling perspective, humor detection has been performed using hand-crafted features and non-neural models~\cite{yang2015humor}, and neural RNN and CNN models for detecting humor in Yelp~\cite{de2017humor} and TED talks~\cite{chen2017predicting}. Newer approaches have used highway networks~\cite{chen2018humor} on the ``16000 One-Liners'' and ``Pun of the Day'' datasets. There have been very few attempts at using extra modalities alongside language for detecting humor, mostly limited to adding simple audio features~\cite{rakov2013sure, bertero2016deep}. Furthermore, these attempts have been restricted to certain topics and domains (such as the ``Big Bang Theory'' TV show \cite{bertero2016deep}). \input{tables/compare_datasets.tex} \noindent \textbf{Multimodal Language Analysis:} Studying natural language from the modalities of text, vision and acoustic is a recent research trend in natural language processing \cite{zadeh2018proceedings}.
Notable works in this area present novel multimodal neural architectures \cite{wang2018words,pham2018found,hazarika2018conversational,poria2017multi,zadeh2017tensor}, multimodal fusion approaches \cite{liang2018multimodal,tsai2018learning,liu2018efficient,zadeh2018memory,barezi2018modality} as well as resources \cite{poria2018meld,zadeh2018multimodal,zadeh2016mosi,park2014computational,rosas2013multimodal,wollmer2013youtube}. Multimodal language datasets mostly target multimodal sentiment analysis \cite{poria2018multimodal}, emotion recognition \cite{zadeh2018multimodal,busso2008iemocap}, and personality traits recognition \cite{park2014computational}. The UR-FUNNY dataset is similar to the above datasets in diversity (speakers and topics) and size, with the main task of humor detection. Beyond the scope of multimodal language analysis, the dataset and studies in this paper have similarities to other applications in multimodal machine learning such as language and vision studies, robotics, image captioning, and media description \cite{baltruvsaitis2019multimodal}. \section{Conclusion} In this paper, we presented a new multimodal dataset for humor detection called UR-FUNNY. This dataset is the first of its kind in the NLP community. Humor detection is done from the perspective of predicting laughter - similar to ~\cite{chen2017predicting}. UR-FUNNY is diverse in both speakers and topics. It contains the three modalities of text, vision and acoustic. We study this dataset through the lens of a Contextualized Memory Fusion Network (C-MFN). Results of our experiments indicate that humor can be better modeled if all three modalities are used together. Furthermore, both context and punchline are important in understanding humor. The dataset and the accompanying experiments will be made publicly available. \section{Results and Discussion} \input{tables/performance.tex} The results of our experiments are presented in Table \ref{table:humor_score}. Results demonstrate that both context and punchline information are important, as C-MFN outperforms the C-MFN (P) and C-MFN (C) models. The punchline is the most important component for detecting humor, as the performance of C-MFN (P) is significantly higher than that of C-MFN (C). Models that use all modalities (T+A+V) outperform models that use only one or two modalities (T, T+A, T+V, A+V). Between text (T) and nonverbal behaviors (A+V), text proves to be the most important modality. In most cases, the vision and acoustic modalities both improve the performance of text alone (T+V, T+A). Based on the above observations, each neural component of the C-MFN model is useful in improving the prediction of humor. The results also indicate that modeling humor from a multimodal perspective yields successful results. The human performance~\footnote{This is calculated by averaging the performance of two annotators over a shuffled set of $100$ humor and $100$ non-humor cases. The annotators are given the same input as the machine learning models (similar context and punchline). The annotators agree $84\%$ of the time.} on the UR-FUNNY dataset is $82.5\%$. The results from Table \ref{table:humor_score} demonstrate that while a state-of-the-art model can achieve a reasonable level of success in modeling humor, there is still a large gap between human-level performance and the state of the art. Therefore, the UR-FUNNY dataset presents new challenges to the field of NLP, specifically the research areas of humor detection and multimodal language analysis.
\section{Multimodal Humor Detection} In this section, we first outline the problem formulation for performing binary multimodal humor detection on the UR-FUNNY dataset. We then proceed to study the UR-FUNNY dataset through the lens of a contextualized extension of the Memory Fusion Network (MFN) \cite{zadeh2018memory} - a state-of-the-art model in multimodal language. \input{figures/humor_model/unimodal/unimodal.tex} \subsection{Problem Formulation} UR-FUNNY is a multimodal dataset with three modalities of text, vision and acoustic. We denote the set of these modalities as $M=\{t,v,a\}$. Each of the modalities comes in sequential form. We assume word-level alignment between modalities \cite{yuan2008speaker}. Since the frequency of the text modality is lower than that of vision and acoustic (i.e. vision and acoustic have higher sampling rates), we use expected visual and acoustic descriptors for each word~\cite{chen2017multimodal}. After this process, each modality has the same sequence length (each word is accompanied by a single vision and acoustic vector). Each data sample in UR-FUNNY can be described as a triplet $(l,P,C)$, with $l$ being a binary label for humor or non-humor, $P$ the punchline and $C$ the context. Both punchline and context have multiple modalities: $P=\{P_m; m \in M\}$, $C=\{C_m; m \in M\}$. If there are $N_C$ context sentences accompanying the punchline, then $C_m = [C_{m,1},C_{m,2},\dots, C_{m,N_C}]$ - simply, the context sentences run from the first to the last ($N_C$) sentence. $K_P$ is the number of words in the punchline and $K_{Cn}|_{n=1}^{N_C}$ is the number of words in each of the context sentences, respectively. As examples of this notation, $P_{m,k}$ refers to the $k$th entry in the modality $m$ of the punchline, and $C_{m,n,k}$ refers to the $k$th entry in the modality $m$ of the $n$th context sentence. Models developed on the UR-FUNNY dataset are trained on triplets $(l,P,C)$. During testing, only the tuple $(P,C)$ is given to predict $l$. $l$ is the label for laughter, specifically whether or not the inputs $P,C$ are likely to trigger laughter. \subsection{Contextual Memory Fusion Baseline} \label{subsec:cmfn} The Memory Fusion Network (MFN) is among the state-of-the-art models for several multimodal datasets \cite{zadeh2018memory}. We devise an extension of the MFN model, named Contextual Memory Fusion Network~\footnote{Code available through hidden-for-blind-review.} (C-MFN), as a baseline for humor detection on the UR-FUNNY dataset. This is done by introducing two components to allow the involvement of context in the MFN model: 1) a \textit{Unimodal Context Network}, where information from each modality is encoded using $M$ Long-short Term Memories (LSTMs), and 2) a \textit{Multimodal Context Network}, where unimodal context information is fused (using self-attention) to extract the multimodal context information. We discuss the components of the C-MFN model in the continuation of this section. \subsubsection{Unimodal Context Network}\label{subsec:unimodal_net} To model the context, we first model each modality within the context. The Unimodal Context Network (Figure \ref{fig:unimodal}) consists of $M$ LSTMs, one for each modality $m \in M$, denoted as LSTM$_m$. For each context sentence $n$ of each modality $m \in M$, LSTM$_m$ is used to encode the information into a single vector $h_{m,n}$. This single vector is the last output of LSTM$_m$ with $C_{m,n}$ as input.
The recurrence step for each LSTM is the utterance of each word (due to word-level alignment, the vision and acoustic modalities also follow this time-step). The output of the Unimodal Context Network is the set $H=\{h_{m,n};\, m \in M,\, 1 \leq n \leq N_C\}$. \subsubsection{Multimodal Context Network}\label{subsec:multimodal_net} The Multimodal Context Network (Figure \ref{fig:multimodal}) learns a multimodal representation of the context based on the output $H$ of the Unimodal Context Network. Sentences and modalities in the context can form complex asynchronous spatio-temporal relations. For example, during the gradual buildup of the context, the speaker's facial expression may be impacted by an arbitrary previously uttered sentence. Transformers \cite{vaswani2017attention} are a family of neural models that specialize in finding various temporal relations between their inputs through self-attention. By concatenating the representations $h_{m \in M, n}$ (i.e. for all $M$ modalities of the $n$th context sentence), a self-attention model can be applied to find asynchronous spatio-temporal relations in the context. We use an encoder with $6$ intermediate layers to derive a multimodal representation $\hat{H}$ conditioned on $H$. $\hat{H}$ is also spatio-temporal (as the outputs of encoders in a transformer are). The output of the Multimodal Context Network is the output $\hat{H}$ of the encoder. \input{figures/humor_model/multimodal/multimodal.tex} \subsubsection{Memory Fusion Network (MFN)}\label{subsec:mfn} After learning unimodal ($H$) and multimodal ($\hat{H}$) representations of the context, we use a Memory Fusion Network (MFN) to model the punchline (Figure \ref{fig:mfn}). The MFN contains two types of memories: a System of LSTMs with $M$ unimodal memories to model each modality in the punchline, and a Multi-view Gated Memory which stores multimodal information. We use a simple trick to combine the Context Networks (Unimodal and Multimodal) with the MFN: we initialize the memories in the MFN using the outputs $H$ (unimodal representation) and $\hat{H}$ (multimodal representation). For the System of LSTMs, this is done by initializing the LSTM cell state of modality $m$ with $\mathcal{D}_m(h_{m,1 \leq n \leq N_C})$. $\mathcal{D}_m$ is a fully connected neural network that maps the information from $h_{m,1 \leq n \leq N_C}$ (the $m$th modality in the context) to the cell state of the $m$th LSTM in the System of LSTMs. The Multi-view Gated Memory is initialized based on a non-linear projection $\mathcal{D}(\hat{H})$, where $\mathcal{D}$ is a fully connected neural network. Similar to the context, where modalities are aligned at word level, the punchline is aligned the same way. Therefore a word-level implementation of the MFN is used, where a word and the accompanying vision and acoustic descriptors are used as input to the System of LSTMs at each time-step. The Multi-view Gated Memory is updated iteratively at every recurrence of the System of LSTMs using a Delta-memory Attention Network. The final prediction of humor is conditioned on the last state of the System of LSTMs and the Multi-view Gated Memory using an affine mapping with Sigmoid activation. \input{figures/humor_model/mfn/mfn.tex} \section{UR-FUNNY Dataset} In this section we present the UR-FUNNY dataset. We first discuss the data acquisition process, and subsequently present statistics of the dataset as well as multimodal feature extraction and validation.
\subsection{Data Acquisition} A suitable dataset for the task of multimodal humor detection should be diverse in a) \textit{speakers}: modeling the idiosyncratic expressions of humor may require a dataset with a large number of speakers, and b) \textit{topics}: different topics exhibit different styles of humor, as the context and punchline can be entirely different from one topic to another. TED talks~\footnote{Videos on \url{www.ted.com} are publicly available for download.} are among the most diverse idea sharing channels, in both speakers and topics. Speakers from various backgrounds, ethnic groups and cultures present their thoughts through a widely popular channel\footnote{More than 12 million subscribers on YouTube \url{https://www.youtube.com/user/TEDtalksDirector}}. The topics of these presentations are diverse; from scientific discoveries to everyday ordinary events. As a result of this diversity in speakers and topics, TED talks span a broad spectrum of humor. Therefore, this platform presents a unique resource for studying the dynamics of humor in a multimodal setup. TED videos include manual transcripts and audience markers. The transcriptions are highly reliable, which in turn allows for aligning the text and audio. This property makes TED talks a unique resource for the newest continuous fusion trends \cite{chen2017multimodal}. Transcriptions also include reliably annotated markers for audience behavior. Specifically, the ``laughter'' marker has been used in NLP studies as an indicator of humor \cite{chen2017predicting}. Previous studies have identified the importance of both punchline and context in understanding and modeling humor. In a humorous scenario, the context is the gradual build up of a story and the punchline is a sudden twist to the story which causes laughter \cite{ramachandran1998neurology}. Using the provided laughter marker, the sentence immediately before the marker is considered the punchline, and the sentences prior to the punchline (but after the previous laughter marker) are considered context. \input{figures/excels/data_distribution/data_stats.tex} We collect $1866$ videos as well as their transcripts from the TED portal. These $1866$ videos are chosen from $1741$ different speakers and across $417$ topics. The laughter markup is used to extract $8257$ humorous punchlines from the transcripts~\cite{chen2017predicting}. The context is extracted from the sentences prior to the punchline (until the previous humor instance or the beginning of the video is reached). Using a similar approach, $8257$ negative samples are chosen at random intervals where the last sentence is not immediately followed by a laughter marker. The last sentence is taken as the punchline and, similar to the positive instances, the context is chosen. This negative sampling uses sentences from the same distribution, as opposed to datasets which use sentences from other distributions or domains as negative samples~\cite{yang2015humor,mihalcea2005making}. After this negative sampling, there is a homogeneous $50\%$ split in the dataset between positive and negative examples. Using forced alignment, we mark the beginning and end of each sentence in the video, as well as the words and phonemes in the sentences \cite{yuan2008speaker}. Therefore, an alignment is established between text, audio and video. Utilizing this alignment, the timing of the punchline as well as the context is extracted for all instances in the dataset.
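To make the sampling procedure concrete, the following is a minimal sketch of the extraction logic. The data layout (a transcript as a list of (sentence, laughter) pairs) and the fixed context window used for negative samples are hypothetical simplifications of the process described above:
\begin{verbatim}
import random

def extract_samples(transcript, context_window=5):
    # transcript: assumed list of (sentence, laughter) pairs, where
    # laughter is True if the sentence is immediately followed by a
    # laughter marker in the TED transcript.
    positives, start = [], 0
    for i, (sentence, laughter) in enumerate(transcript):
        if laughter:
            # sentence before the marker is the punchline; sentences
            # since the previous marker (or video start) are context
            positives.append((transcript[start:i], sentence))
            start = i + 1
    # negatives: random positions whose sentence is NOT followed by a
    # laughter marker; a fixed context window is used here for brevity
    candidates = [i for i, (_, laugh) in enumerate(transcript)
                  if not laugh]
    picks = random.sample(candidates,
                          min(len(positives), len(candidates)))
    negatives = [(transcript[max(0, i - context_window):i],
                  transcript[i][0]) for i in picks]
    return positives, negatives
\end{verbatim}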
\input{tables/summary.tex} \subsection{Dataset Statistics}\label{sec:dataset} The high level statistics of the UR-FUNNY dataset are presented in Table \ref{summary_table}. The total duration of the entire dataset is $90.23$ hours. There are a total of $1741$ distinct speakers and a total of $417$ distinct topics in the UR-FUNNY dataset. Figure \ref{humordata}.e shows the word cloud of the topics based on the log-frequency of each topic. The five most frequent topics are technology, science, culture, global issues and design\footnote{Metadata collected from \url{www.ted.com}}. There are in total $16514$ video segments of humor and non-humor instances (equal splits of $8257$). The average duration of each data instance is $19.67$ seconds, with the context averaging $14.70$ seconds and the punchline averaging $4.97$ seconds. The average number of words in the punchline is $16.14$ and the average number of words per context sentence is $14.80$. Figure \ref{humordata} shows an overview of some of the important statistics of the UR-FUNNY dataset. Figure \ref{humordata}.a demonstrates the distribution of punchlines for humor and non-humor cases based on the number of words. There is no clear distinction between humor and non-humor punchlines, as both follow a similar distribution. Similarly, Figure \ref{humordata}.b shows the distribution of the number of words per context sentence. Both humor and non-humor context sentences follow the same distribution. The majority $(\geq 90\%)$ of punchlines have fewer than $32$ words. Figure \ref{humordata}.d shows the distribution of punchline and context sentence lengths in seconds. Figure \ref{humordata}.c demonstrates the distribution of the number of context sentences per humor and non-humor data instance. The number of context sentences per humor and non-humor case is also roughly the same. The statistics in Figure \ref{humordata} show that there are no trivial or degenerate distinctions between humor and non-humor cases. Therefore, classification of humor versus non-humor cases cannot be done based on simple measures (such as the number of words); it requires understanding the content of the sentences. \input{tables/folds.tex} Table \ref{summary2} shows the standard train, validation and test folds of the UR-FUNNY dataset. These folds share no speakers with each other - hence the standard folds are speaker independent \cite{zadeh2016mosi}. This minimizes the chance of overfitting to the identity of the speakers or their communication patterns. \subsection{Extracted Features} For each modality, the extracted features are as follows: \newline \noindent \textbf{Language:} GloVe word embeddings \cite{pennington2014glove} are used as pre-trained word vectors for the text features. The P2FA forced alignment model \cite{yuan2008speaker} is used to align the text and audio at the phoneme level. From the forced alignment, we extract the timing annotations of context and punchline at the word level. Then, the acoustic and visual cues are aligned at the word level by interpolation \cite{chen2017multimodal}. \newline \noindent \textbf{Acoustic:} The COVAREP software \cite{degottex2014covarep} is used to extract acoustic features at a rate of 30 frames/sec.
We extract the following 81 features: fundamental frequency (F0), voiced/unvoiced segmenting features (VUV) \cite{drugman2011joint}, normalized amplitude quotient (NAQ), quasi open quotient (QOQ) \cite{kane2013wavelet}, glottal source parameters (H1H2, Rd, Rd conf) \cite{drugman2012detection,alku2002normalized,alku1997parabolic}, parabolic spectral parameter (PSP), maxima dispersion quotient (MDQ), spectral tilt/slope of wavelet responses (peak/slope), Mel cepstral coefficients (MCEP 0-24), harmonic model and phase distortion means (HMPDM 0-24) and deviations (HMPDD 0-12), and the first 3 formants. These acoustic features are related to emotions and the tone of speech. \newline \noindent \textbf{Visual:} The OpenFace facial behavioral analysis tool \cite{baltruvsaitis2016openface} is used to extract facial expression features at a rate of 30 frames/sec. We extract all facial Action Unit (AU) features based on the Facial Action Coding System (FACS) \cite{ekman1997face}. Rigid and non-rigid facial shape parameters are also extracted \cite{baltruvsaitis2016openface}. We observed that the camera angle and position change frequently during TED presentations. However, for the majority of the time, the camera stays focused on the presenter. Due to the volatile camera work, the only consistently available source of visual information was the speaker's face. \newline The UR-FUNNY dataset is publicly available for download alongside all the extracted features.
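As an illustration of the word-level alignment used for the acoustic and visual features, the following sketch computes one expected (averaged) feature vector per word from per-frame descriptors and forced-alignment word timings. It is a minimal NumPy sketch; the function and argument names are hypothetical, and only the 30 frames/sec rate and the word-level averaging follow the text:
\begin{verbatim}
import numpy as np

def align_features_to_words(frame_feats, word_times, fps=30):
    # frame_feats: (num_frames, dim) array of acoustic or visual
    # features extracted at fps frames/sec; word_times: list of
    # (start_sec, end_sec) word intervals from forced alignment.
    word_vectors = []
    for start, end in word_times:
        lo = int(start * fps)
        hi = max(int(end * fps), lo + 1)  # at least one frame
        word_vectors.append(frame_feats[lo:hi].mean(axis=0))
    return np.stack(word_vectors)  # (num_words, dim)
\end{verbatim}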
{ "timestamp": "2019-04-16T02:13:47", "yymm": "1904", "arxiv_id": "1904.06618", "language": "en", "url": "https://arxiv.org/abs/1904.06618" }
\section{Introduction} Developed in molecular physics, the celebrated Born-Oppenheimer (BO) approximation (in other words, the adiabatic approximation) \cite{BO} (for a discussion see \cite{LL}) takes advantage of the large difference between the masses of the nuclei and the electrons. This approach allows one to separate the fast (electronic) degrees of freedom from the slow (nuclear) ones. At the first stage of this approximation the nuclei are assumed to be clamped, forming a certain spatial configuration; thus, the nuclear masses are infinite and the internuclear distances become classical, non-dynamical variables. Hence, the original many-body Coulomb problem becomes the problem of electrons interacting with several fixed Coulomb centers. Its dynamics is governed by the electronic Hamiltonian, \begin{equation} \label{Hel} {\cal H}_{e} \ =\ T_e\ +\ V(r_e; R_N)\ ,\ \end{equation} where the electronic kinetic energy \[ T_e\ =\ -\frac{1}{2}\sum_i \nabla^2_i (r_e)\ , \] is the sum of the kinetic energies of the individual electrons (with $\hbar=1$, $m_e=1$), and the Coulomb potential energy, which is the sum of three terms, \[ V(r_e; R_N)\ =\ V_n(R_N)\ +\ V_{en}(r_e, R_N)\ +\ V_e(r_e)\ , \] corresponds to the internuclear interaction, the electron-nuclear interaction and the electron-electron interaction, respectively; the coordinates of the nuclei $R_N$ are fixed and become classical, playing the role of parameters in the Hamiltonian (\ref{Hel}). Due to the translation-invariant nature of the Coulomb potential $V(r_e; R_N)$, only the relative distances between particles appear in the potential. One can calculate in a straightforward way, usually numerically, the spectrum of the electronic Hamiltonian ${\cal H}_{e}$, obtaining the spectrum of the electronic energy $E_e$, in other words, the electronic terms. The obtained electronic energy depends on the nuclear configuration, thus leading to the so-called Potential Energy Surface (PES) $V(R_N)$, where the nuclear configuration occurs as the argument. It implies that the internuclear distances $R_{ij}=|{\bf R}_i-{\bf R}_j|$ are not dynamical. At the next stage the internuclear distances $R_{ij}$ are restored as dynamical variables and the PES $V(R_N)$ is taken as the potential for the nuclear motion. It is worth noting that in this approach the center-of-mass of the original many-body Coulomb system differs {\it slightly} from the center-of-mass of the nuclei. This does not affect the PES but it leads to some corrections in the study of nuclear motion (for a discussion, see \cite{Cederbaum:2013} and references therein). It will be neglected in the present study, since we are focused on the properties of the PES. It must be emphasized that for polyatomic molecules the PES $V(R_N)$ can be written as a sum over 2-, 3- etc. body interactions. Schematically, it can be written as \[ V(R_N) \ =\ V_2(R)\ +\ V_3(R_{12}, R_{13}, R_{23})\ +\ \ldots \quad . \] In the present paper we focus on pairwise potentials $V(R)$, which define dimers. In the physically important case of diatomic molecules and ions, in particular, of the singly positively charged diatomic molecular ion $(A + B)^+$ with nuclear charges $Z_{A}$ and $Z_{B}$, respectively, and $(Z_A+Z_B-1)$ electrons, on which we focus, the nuclear configuration is defined by the single internuclear (classical) distance $R$. In this case the PES becomes the Potential Energy Curve (PEC) $E_e=V(R)$.
From the physics point of view, taking the charges $Z_{A,B}$ as probes, the potential \[ V(R)=\frac{Z_{A} Z_{B}}{R}\,S(R)\ , \] measures the screening $S(R)$ of the Coulomb interaction of the nuclei due to the presence of the electronic media. Sometimes, the interplay between the Coulomb repulsion at small distances and the Van-der-Waals attraction at large distances leads to a sufficiently deep minimum in $V(R)$ at distances of order 1 a.u. (other than the Van-der-Waals minimum), manifesting the existence of bound states and, finally, of the molecule. In the rotating frame the potential $V(R)$ is accompanied by a centrifugal term. These states are called rovibrational states. Since the early days of quantum mechanics the main attention was given to the creation of models which describe the vicinity of the potential well accurately: the harmonic and Morse oscillators, the P\"oschl-Teller, Lennard-Jones and Buckingham potentials, and their numerous modifications, see e.g. \cite{Yanar:2016} and references therein. Usually, these models were purely phenomenological; they never pretended to describe the whole potential curve and the total rovibrational spectra beyond several lowest states situated close to the minimum of the PEC and the so-called anharmonicity constants. A new development happened in \cite{OT:2018}, where for the first time, for the simple H$_2^+$, H$_2$, (HeH) molecules, certain (generalized) meromorphic potentials of a new type were proposed which made it possible to find the whole rovibrational spectra for a set of electronic terms (PEC) and even some transition amplitudes \cite{OT:2016}. Those potentials were characterized by the correct asymptotic behavior at small and large internuclear distances. They had the form of (generalized) rational functions, modified by exponential terms for homonuclear diatomic molecules only. The goal of this paper is to construct simple analytic potentials modelling the PEC which are able to describe the rovibrational spectra of dimers. Note that these potentials result from interpolation between the small and large distance behavior of the PEC and, in general, they do not assume {\it a priori} the existence of a global minimum at $R \sim 1$\,a.u. As an illustration of the general construction, He$_2^+$ and $^7$LiH will be considered. \section{Generalities} Analyzing the electronic Hamiltonian for the $(A + B)^+$ ion, one can find that the potential $V(R)$ at small $R$ is defined via perturbation theory in $R$, \begin{equation} \label{VRsmall} V(R)\ =\ \frac{Z_{A} Z_{B}}{R}\ +\ E_a\ +\ E_1\, R + O(R^2) \ , \end{equation} where the first (classical) term comes from the Coulomb repulsion of the nuclei, and the second term $E_a$ is the energy of the united ion with total nuclear charge $(Z_{A} + Z_{B})$ and $(Z_{A} + Z_{B}-1)$ electrons, see e.g. \cite{LL}, $\S$ 80. It was observed that the linear term is always absent, $E_1=0$ \cite{RAB:1958,WAB:1959,OT:2018}. At large distances $R$, for the ground state potential curve, the leading term of the interaction of a neutral atom with a charged atomic ion is given by the Van-der-Waals attraction term with corrections in powers of $1/R$, \begin{equation} \label{VRlarge} V(R)\ =\ -\frac{c_4}{R^{4}}\ +\ \frac{c_5}{R^{5}}\ +\ \frac{c_6}{R^{6}}\ +\ \ldots \ ,\ c_4 > 0\ , \end{equation} see \cite{MK:1971} and, for a recent extended discussion, \cite{IK:2006}. Here the parameters $c_{4, \ldots}$ are related to the (hyper)polarizabilities of different orders, see \cite{LL}, $\S$ 89.
For the case of a neutral dimer (in fact, for two neutral atoms at large internuclear distances) the first two coefficients are usually absent, $c_4=c_5=0$. It is evident that the attraction at large distances together with the repulsion at small distances implies the existence of a minimum of the potential curve. If this minimum is situated at large distances and is very shallow, it is usually called the Van-der-Waals minimum. It was shown that for the interaction of an ion and a neutral atom, each in its respective ground state, the coefficient $c_5=0$ and, hence, the term $\sim \frac{1}{R^{5}}$ in (\ref{VRlarge}) is absent~\cite{MK:1971,IK:2006}. The expansion (\ref{VRlarge}) remains functionally the same for both dissociation channels, $A^+ + B$ and $A + B^+$, while the expansion (\ref{VRsmall}) at small distances evidently remains simply the same for both channels. In many cases the known potential curves are smooth monotonous curves with slight irregularities due to level (quasi)-crossing effects at complex $R$, see e.g. \cite{MK:1971} and \cite{LL}. This suggests interpolating between the expansions (\ref{VRsmall})-(\ref{VRlarge}) using a two-point Pade approximation \begin{equation} \label{Pade} V_{ion}(R)\ =\ \frac{Z_{A} Z_{B}}{R}\ \frac{P_N(R)}{Q_{N+3}(R)}\ \equiv\ \frac{Z_{A} Z_{B}}{R}\ \mbox{Pade}(N / N+3)\ , \end{equation} where $P_N, Q_{N+3}$ are polynomials in $R$ of degrees $N$ and $(N+3)$, respectively, with $P_N(0)= Q_{N+3}(0)=1$, as was introduced for the first time in \cite{OT:2018} for the H$_2^+$ molecular ion with the condition $Q_{N+3} > 0$ for $R>0$. The condition of positivity of the denominator $Q_{N+3}(R)$ in (\ref{Pade}) (implying the absence of real positive roots) leads to constraints on its coefficients; it cannot be fulfilled for every $N$. This formula seems applicable to {\it any} singly positively charged diatomic molecular ion, for both hetero- and homo-nuclear dimers. A similar formula can be constructed for the case of neutral dimers $(A + B)$ with dissociation channel $A + B$, \begin{equation} \label{PadeN} V_{neutral}(R)\ =\ \frac{Z_{A} Z_{B}}{R}\ \frac{P_N(R)}{Q_{N+5}(R)}\ \equiv\ \frac{Z_{A} Z_{B}}{R}\ \mbox{Pade}(N / N+5)\ , \end{equation} where $P_N, Q_{N+5}$ are polynomials in $R$ of degrees $N$ and $(N+5)$, respectively, with $P_N(0)= Q_{N+5}(0)=1$, as was introduced in \cite{OT:2018} for the H$_2$ molecule with the condition $Q_{N+5} > 0$ for $R>0$. The condition of positivity of the denominator $Q_{N+5}(R)$ in (\ref{PadeN}) implies the absence of real positive roots; it leads to constraints on its coefficients, and also cannot be fulfilled for every integer $N$. The parameter $N$ in (\ref{Pade}) and (\ref{PadeN}) takes integer values, $N=0,1,\ldots$. The polynomials $P_N, Q_{N+k}$ are chosen in such a way that some free parameters in $\mbox{Pade}(N/ N+k)$ for $k=3,5$ are fixed in order to reproduce exactly several leading coefficients in both expansions (\ref{VRsmall})-(\ref{VRlarge}). The remaining free parameters are found by requiring that the potential is reproduced at some points in $R$ where it is known numerically from solving the electronic Schr\"odinger equation. It might be considered a surprising fact that the leading coefficients in the expansions (\ref{VRsmall})-(\ref{VRlarge}) {\it know} about the existence/non-existence of the global, non-Van-der-Waals minimum.
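To make the construction concrete, the following minimal sketch (Python with SciPy) builds a potential of the form (\ref{Pade}) and fits its free coefficients to sample points of a numerically known PEC. For brevity all coefficients are fitted here, whereas in the text several of them are fixed analytically by the expansions (\ref{VRsmall})-(\ref{VRlarge}); the target data below are placeholders, not a real electronic-structure calculation:
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def v_pade(R, p, q, ZA=2.0, ZB=2.0):
    # Two-point Pade form (ZA*ZB/R) * P_N(R)/Q_{N+3}(R) with
    # P(0) = Q(0) = 1; p and q hold the higher coefficients.
    P = 1.0 + sum(c * R ** (i + 1) for i, c in enumerate(p))
    Q = 1.0 + sum(c * R ** (i + 1) for i, c in enumerate(q))
    return ZA * ZB / R * P / Q

# Placeholder "numerical" PEC points standing in for values obtained
# by solving the electronic Schroedinger equation on a mesh in R:
R_pts = np.linspace(0.8, 15.0, 60)
V_pts = 4.0 / R_pts * np.exp(-R_pts) - 1.4 / (R_pts ** 4 + 20.0)

def residuals(theta, N=2):
    return v_pade(R_pts, theta[:N], theta[N:]) - V_pts

fit = least_squares(residuals, x0=0.1 * np.ones(2 + 5))
p_fit, q_fit = fit.x[:2], fit.x[2:]   # N = 2, so Q has degree 5
\end{verbatim}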
If such a minimum exists, the remaining free parameters in (\ref{Pade}) and (\ref{PadeN}) can be found by expanding $V_{ion}(R), V_{neutral}(R)$ near the minimum in a Taylor expansion \[ V(R)\ =\ -D_e\ +\ \sum_n f_n (R - R_{eq})^n\ , \] where $D_e$ is the dissociation energy, $R_{eq}$ is the equilibrium distance (it corresponds to the position of the minimum) and $f_n$ are the molecular or (an)harmonic constants, or in the Dunham expansion \[ V(R)\ =\ D_e \bigg[\left(1 - e^{-\alpha (R - R_{eq})}\right)^2\ +\ P_3\left(1 - e^{-\alpha (R - R_{eq})}\right)^3\ +\ P_4\left(1 - e^{-\alpha (R - R_{eq})}\right)^4\ +\ \ldots \bigg]\ , \] where $\alpha$ is the Morse constant and $P_{3,4,\ldots}$ are parameters, see \cite{Dunham:1932}. An important characteristic of the PEC is the point $R=R_0$, where \begin{equation} \label{R0} V_{ion}(R_0)\ =\ V_{neutral}(R_0)\ =\ 0\ , \end{equation} with $R_0 < R_{eq}$. For $R > R_0$ the potential becomes negative; the bound states (if they exist) are localized in this domain. The influence of the behavior of the potential at $R < R_0$ on the positions of the bound states needs to be investigated. In the case of identical nuclei, $A=B$ (the homonuclear case), the system $(A + A)^+$ is permutationally invariant, $Z_A \leftrightarrow Z_B$, and an extra quantum number occurs: the parity with respect to the interchange of the nuclear positions. The exchange energy (or, saying it differently, the energy gap) - the difference $\Delta E=(E_- - E_+)$ between the potential curve of the first excited state (of negative parity) $E_-$ and that of the ground state (of positive parity) $E_+$ - tends to zero exponentially at large $R$, \[ \Delta E\ =\ D\, e^{-S_0}\left(1 + \frac{e}{R} + \ldots\right)\ , \] where $D > 0$ is a monomial in $R$. Furthermore, the exponent is $S_0 = \alpha R$, where the parameter $\alpha$ depends on the molecular ion explored \cite{Chib-Jan:1988}, see below. It implies that these potential curves can be written at large $R$ in the following form, \begin{equation} \label{E+- Rlarge} E_{\mp}\ =\ E_0(R)\ \pm\ \frac{1}{2}\,\delta E_{\pm}(R)\ , \end{equation} where $E_0(R)$ is given by the multipole expansion (\ref{VRlarge}); it is the same for the lowest energy states of both parities, hence, it does not depend on the state. It is clear that both $\delta E_{\pm}$ are exponentially small. A similar phenomenon of pairing of states of opposite parities at large $R$ occurs for the potential curves of the excited states. In the thoroughly studied case of the H$_2^+$ molecular ion, see \cite{Cizek:1986}, the expansion of $\delta E_{\pm}$ looks like a transseries: an expansion in multi-instanton contributions, each of them accompanied by a perturbation theory in $1/R$ of a special structure, similar to the one for the one-dimensional quartic double-well potential problem, \begin{equation} \label{VRlarge-EXP-pm} \delta E_{\pm} =\ D_0 e^{-S_0} \left(1 + \frac{e}{R}\ +\ O\left(\frac{1}{R^{2}}\right) \right)\ \pm \ D_1 e^{-2 S_0} \left(1 + \frac{e_1}{R}\ +\ O\left(\frac{1}{R^{2}}\right) + a \log R \right)\ +\ \ldots \ , \end{equation} c.f. \cite{Zinn:1981,Dunne-Unsal:2014} (and references therein), where $e, e_1, a$ are constants, $D_0=\frac{4}{e}R$, $D_1 \sim R^3$. The energy gap has the form \begin{equation} \label{VRlarge-EXP} \Delta E(R)\ \equiv\ \frac{\delta E_- + \delta E_+}{2} =\ D_0 e^{-S_0} \left(1 + \frac{e_1}{R}\ +\ O\left(\frac{1}{R^{2}}\right) \right)\ +\ \ldots \ , \end{equation} where the exponent $S_0=R$ looks like a classical action (one-instanton contribution) and $D_0=\frac{4}{e}R$ looks like a one-instanton determinant in semi-classical tunneling between two identical Coulomb wells.
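Given an analytic potential of the Pad\'e form above (for instance, the hypothetical fit from the previous sketch), the characteristic points $R_{eq}$ and $R_0$ of (\ref{R0}) can be located numerically. A minimal SciPy sketch, assuming the sign change needed by the root bracket actually occurs:
\begin{verbatim}
from scipy.optimize import brentq, minimize_scalar

V = lambda R: v_pade(R, p_fit, q_fit)   # from the previous sketch

# R_eq: position of the (global) minimum of the potential curve
res = minimize_scalar(V, bounds=(0.5, 15.0), method="bounded")
R_eq, D_e = res.x, -res.fun     # equilibrium distance, well depth

# R_0: zero crossing V(R_0) = 0 with R_0 < R_eq
# (assumes V changes sign on the bracket [0.5, R_eq])
R_0 = brentq(V, 0.5, R_eq)
\end{verbatim}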
In all concrete cases the present authors are familiar with (the H$_2^+$, H$_2$, He$_2^+$, Li$_2^+$, Be$_2^+$ ions), the exponent $S_0$ is linear in $R$ with a coefficient which depends on the system studied, $S_0=\alpha R$, see \cite{Chang:1995,OT:2018} and references therein (see Table~\ref{tparmsdE}). In turn, in the leading term in (\ref{VRlarge-EXP}), $D \sim R$ for H$_2^+$, $D \sim R^{1/2}$ for He$_2^+$ (see below), $D \sim R^{5/2}$ for H$_2$, see e.g.\cite{OT:2018}, while in all other cases $D \sim R^{\beta}$; hence, it is a monomial of some degree $\beta$. It is worth emphasizing that the exponential smallness of the energy gap at large $R$ implies the well-known fact that the expansions (\ref{VRlarge}) for the ground state and the first excited state coincide. In turn, at small $R$ the expansion of the energy gap in $R$ is given by the Taylor series \begin{equation} \label{VRsmall-EXP} \Delta E(R)\ =\ e_a\ +\ e_b R + O(R^2)\ , \end{equation} where $e_a=E_{-}^{u.a.}-E_{+}^{u.a.}$ is the difference between the first excited state and the ground state energies of the united atom (u.a.). \begin{center} \begin{table}[!h] \caption{Parameters $\alpha$ and $\beta$ of the energy gap in the leading approximation (\ref{VRlarge-EXP}), with $S_0=\alpha R$ and $D \sim R^{\beta}$, for the systems H$_2^+$~\cite{Cizek:1986}, H$_2$~(see \cite{LL}, $\S$ 81, p.315), He$_2^+$ (present work, see Chapter III), Li$_2^+$ and Be$_2^+$~\cite{Chang:1995,S:2001}.} \begin{tabular}{c| lllll} \\ \hline\hline &\ H$_2^+$\ &\ H$_2$\ &\ He$_2^+$\ &\ Li$_2^+$\ &\ Be$_2^+$\\ \hline \ $\alpha$\quad &\ 1\ &\ 2 \ &\ 1.344\ &\ 0.629 \ &\ 0.829 \\ \ $\beta$ \quad &\ 1 & 5/2 &\ 1/2 & 2.1796 & 1.4125 \\ \hline\hline \end{tabular} \label{tparmsdE} \end{table} \end{center} The next step is to construct an analytic approximation of the exchange energy $\Delta E$ which interpolates between the small (\ref{VRsmall-EXP}) and large (\ref{VRlarge-EXP}) internuclear distance expansions. If $D \sim R^n$ in~\re{VRlarge-EXP}, where $n$ is an integer, this is realized using a two-point Pad\'e-type approximation \begin{equation} \label{Pade-n} \Delta E(R)_{\{n_0,n_{\infty}\}}= e^{-S_0}\frac{P_{N+n}(R)}{Q_{N}(R)}|_{\{n_0,n_{\infty}\}} \equiv e^{-S_0}{\rm Pade}[N+n/N]_{\{n_0,n_{\infty}\}}(R), \end{equation} where $P_{N+n}(R)$ and $Q_N(R)$ are polynomials of degrees $N+n$ and $N$, respectively. This approximation is supposed to reproduce $n_0$ terms of the expansion at small and $n_{\infty}$ terms at large internuclear distances, respectively. If $n$ is half-integer, a change of variable is needed: $r=\sqrt{R}$. In particular, if $n=5/2$ (the case of H$_2$) and $S_0=2R$, the two-point Pad\'e-type approximation reads \begin{equation} \label{Pade-5/2} \Delta E(R=r^2)_{\{n_0,n_{\infty}\}}= e^{-2\, r^2}\frac{P_{N+5}(r)}{Q_{N}(r)}|_{\{n_0,n_{\infty}\}} \equiv e^{-S_0}{\rm Pade}[N+5/N]_{\{n_0,n_{\infty}\}}(r)\ . \end{equation} The case $n=1/2$ (the He$_2^+$ ion) will be presented later in this paper. In order to properly reproduce the behavior of $n_0$ terms at small \re{VRsmall-EXP} and $n_{\infty}$ terms at large \re{VRlarge-EXP} internuclear distances, constraints on the parameters of the polynomials $P_{N+n}(R)$ and $Q_N(R)$ are imposed. Due to the exponential in $R$ dependence of $\delta E_{\pm}$ \re{E+- Rlarge}, the main contribution to the energy in the potential curves $E_{\mp}$ at large internuclear distances comes from the mean energy term $E_0(R)$, \begin{equation} E_{0}(R) = \frac{E_+\ +\ E_-}{2}\ , \end{equation} see (\ref{E+- Rlarge}).
Neglecting the two-instanton contribution ($\sim e^{-2 S_0}$) and higher-order exponentially small contributions, the expansion of the mean energy $E_0(R)$ at large distances is given by~\re{VRlarge}. On the other hand, at small internuclear distances the expansion of $E_{0}(R)$ has the same structure as~\re{VRsmall} with $Z_A=Z_B$, where \[ E_a=\frac{E_{+}^{u.a.}+E_{-}^{u.a.}}{2}\ , \] is the mean energy of the ground and first excited states of the system in the united atom (u.a.) limit. The analytic approximation for the mean energy $E_0$ which mimics the asymptotic expansions at small \re{VRsmall} and large \re{VRlarge} distances is again a two-point Pad\'e approximation of the form~\re{Pade}, \begin{equation} E_0(R)_{\{n_0,n_{\infty}\}}= \frac{Z^2}{R}\ \frac{P_{N}(R)}{Q_{N+3}(R)}\bigg|_{\{n_0,n_{\infty}\}} \equiv \frac{1}{R}\ {\rm Pade}[N/N+3]_{\{n_0,n_{\infty}\}}(R)\ . \end{equation} This approximation is supposed to reproduce $n_0$ terms of the expansion at small and $n_{\infty}$ terms at large internuclear distances for the mean energy. \bigskip This procedure has already been applied successfully to the diatomic molecular hydrogen ion H$_2^+$ $(p,p,e)$~\cite{OT:2018}. In order to further illustrate the approach to the general theory of potential curves presented above, the diatomic molecular ion He$_2^+$ $(\alpha,\alpha,3e)$ will be considered as an example. Our concrete goal is to construct simple analytic expressions for the PEC of the ground state $X^2 \Sigma_u^+$ and the first excited state $A^2 \Sigma_g^+$ in the full range of internuclear distances. \section{Molecular ion H{\small e}$_2^+$ as the example} \subsection{Introduction} Theoretical studies of He$_2^+$ have been carried out for many years since the pioneering work by L.~Pauling~\cite{LP:1933}. Already in that paper it was found explicitly that the ground state PEC exhibits a well-pronounced minimum indicating the existence of the molecular ion He$_2^+$. This observation by Pauling was confirmed in subsequent theoretical studies (see e.g.~\cite{AH:1991,XPG:2005} and references therein). However, despite the fast development of numerical methods and computer power, an accurate description of the potential energy curves for He$_2^+$ is still in progress. Only recently, the PEC for the ground state was presented with an absolute accuracy of $0.05$ cm$^{-1}$~\cite{TPA:2012} in the domain $R \in [0.9, 100.]$\,a.u., in the form of a mesh with step 0.1\,a.u. at small and 1\,a.u. at large internuclear distances. The minimum of the potential well was localized at $R_{eq}=2.042$~a.u.~\cite{TPA:2012}. At the same time, we are unaware of studies for $R < 0.9$\,a.u. It was found that the ground state electronic term for $R > 0.9$\,a.u. is a very smooth curve without any irregularities. To the best of the present authors' knowledge, the above-mentioned accuracy has not yet been achieved for the excited states. Note that for the first excited state $A^2 \Sigma_g^+$ an irregularity in the PEC was found at distances smaller than the equilibrium one, $R < R_{eq}$ (see \cite{AH:1991} and references therein), due to level quasi-crossing(s), while the Van-der-Waals minimum occurs at large distances, $R_{eq} \sim 9$\,a.u. The influence of that irregularity of the potential curve on the rovibrational spectra needs to be investigated. Finally, the knowledge of accurate analytic expressions for the PEC allows us to calculate easily the rotational and vibrational states by solving the Schr\"odinger equation for the nuclear motion with an analytic potential.
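As a sketch of this last step, the radial equation for the nuclear motion (written out as eq.~\re{eq17} below and solved in this paper by the Lagrange-mesh method) can also be handled by a simple finite-difference diagonalization. The following is a minimal illustration under the paper's conventions (energies in Ry, masses in units of $m_e$), not the production method:
\begin{verbatim}
import numpy as np

def rovib_levels(V, mu=3647.149771, L=0, R_max=40.0, n=2000):
    # Diagonalize -(1/mu) d^2/dR^2 + L(L+1)/(mu R^2) + V(R) on
    # (0, R_max) with phi(0) = phi(R_max) = 0 (Dirichlet grid).
    R = np.linspace(0.0, R_max, n + 2)[1:-1]   # interior points
    h = R[1] - R[0]
    diag = 2.0 / (mu * h ** 2) + L * (L + 1) / (mu * R ** 2) + V(R)
    off = -np.ones(n - 1) / (mu * h ** 2)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)

# Bound rovibrational states lie below the dissociation threshold:
# E = rovib_levels(V, L=0); bound = E[E < 0]
\end{verbatim}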
Atomic units are used throughout, although the energy is given in Rydbergs. \subsection{The Energy Gap $\Delta E$} Let us start by considering the behavior of the energy gap $\Delta E$ between the excited state $A^2 \Sigma_g^+$ and the ground state $X^2 \Sigma_u^+$, $$\Delta E=E_{A^2 \Sigma_g^+}-E_{X^2 \Sigma_u^+}\ .$$ Following Bingel \cite{WAB:1959}, for small internuclear distances $R\rightarrow 0$ the behavior is given by \begin{equation} \label{eq10} \Delta E = \delta_0 + 0\cdot R + O(R^2)\,, \end{equation} where \begin{displaymath} \delta_0=E^{{\rm Be}^+}_{2^1P_{1/2}}- E^{{\rm Be}^+}_{2^1S_{1/2}}, \end{displaymath} is the difference between the (rounded) energies of the Beryllium ion Be${}^+$~\cite{PP:2008}, \begin{eqnarray} \label{Ebely} E^{{\rm Be}^+}_{2^1S_{1/2}} & = & -28.649\,526\,{\rm Ry}\ , \\ E^{{\rm Be}^+}_{2^1P_{1/2}} & = & -28.358\,666\,{\rm Ry}\ .\nonumber \end{eqnarray} As for large internuclear distances $R\rightarrow \infty$, the energy gap $\Delta E$ is given by~\cite{Chang:1995,S:2001} \begin{equation} \label{eq11} \Delta E = R^{1/2} e^{-\alpha_0 R}\left[\epsilon_0+\frac{\epsilon_1}{R} +\frac{\epsilon_2}{R^2}+\cdots\right], \end{equation} where $\alpha_0 = 1.344$, $\epsilon_0=6.608\,573$, $\epsilon_1=2.296\,763$ and $\epsilon_2=0.252\,798$. Now we look for an expression that interpolates between the expansions \re{eq10} and \re{eq11}. In order to do that, a new variable is introduced, \begin{displaymath} r=\sqrt{R}\ , \end{displaymath} and at the same time the parameters $\epsilon_1$ and $\epsilon_2$ are released, which gives more flexibility to the approximation. The two-point Pad\'e-type approximation is given by \begin{equation} e^{-1.344\, r^2}\, \mbox{Pade}[N+1/N](r)\ . \end{equation} Explicitly, taking $N=11$, \begin{equation} \label{eq13} \Delta E_{\{2,1\}}\ =\ e^{-\alpha_0 r^2}\frac{\delta_0\ +\ \alpha_0 \delta_0r^2\ +\ \sum_{i=2}^{5} a_i r^{2i}\ +\ \epsilon_0 r^{12}}{1+b_1 r^7\ +\ b_2r^9\ +\ r^{11}}\ . \end{equation} After fitting to the numerical results of~\cite{XPG:2005}, the six free parameters take the values \begin{eqnarray*} a_2 = -123.748\ , && b_1 = -1.15654\ ,\\ a_3 = \phantom{-} 214.186\ , && b_2 = \phantom{-} 4.04014\ ,\\ a_4 = -108.275\ , && \\ a_5 = \phantom{-} 46.8906\ . && \end{eqnarray*} The asymptotic behavior of the expression for $\Delta E_{\{2,1\}}$~\re{eq13} reproduces exactly the first two terms for small internuclear distances $R\rightarrow 0$ \re{eq10}, $n_0=2$, and one term for large internuclear distances $R\rightarrow \infty$ \re{eq11}, $n_{\infty}=1$. A comparison between the fit~\re{eq13} and the numerical results~\cite{XPG:2005} is presented in Fig.~\ref{fe0ed}. \begin{figure}[h!] \begin{center} \includegraphics[width=10cm]{fe0dE.eps} \caption{Fits for the mean energy $E_0$ (dotted line) from \re{eq7} and the energy gap $\Delta E$ (solid line) from \re{eq13}.
Points represent the numerical results~\cite{XPG:2005}.} \label{fe0ed} \end{center} \end{figure} \subsection{The mean energy $E_0$} The energy, measured from the dissociation limit, of the ground $X^2 \Sigma_u^+$ and the first excited state $A^2 \Sigma_g^+$ at small internuclear distances $R\rightarrow 0$ is given by \begin{eqnarray} \label{eq1} \tilde E_{X^2 \Sigma_u^+}^{(0)} &=& \frac{2Z^2}{R} + (E^{{\rm Be}^+}_{2^1S_{1/2}} - E_{\infty})+ 0\cdot R + O(R^2)\nonumber \ ,\\ \tilde E_{A^2 \Sigma_g^+}^{(0)} &=& \frac{2Z^2}{R} + (E^{{\rm Be}^+}_{2^1P_{1/2}} - E_{\infty})+ 0\cdot R + O(R^2) \ , \end{eqnarray} where $E^{{\rm Be}^+}_{2^1S_{1/2}}$ and $E^{{\rm Be}^+}_{2^1P_{1/2}}$ are given by~\re{Ebely} and \[ E_{\infty}= E_{\rm He} + E_{{\rm He^+}} =-(5.807\,449\,+\, 4.000\,000)\, {\rm Ry}=-9.807\,449\,{\rm Ry}\ , \] is the asymptotic energy of the diatomic molecular ion He$_2^+$. The mean energy $E_0$, \begin{equation} E_0 = \frac{\tilde E_{X^2 \Sigma_u^+}^{(0)} +\tilde E_{A^2 \Sigma_g^+}^{(0)}}{2}\ , \end{equation} at small internuclear distances $R\rightarrow 0$, see \re{eq1}, reads \begin{equation} \label{eq5} E_0 = \frac{2Z^2}{R} + C_0 + 0\cdot R + O(R^2)\ , \end{equation} where $C_0 = (E^{{\rm Be}^+}_{S}+E^{{\rm Be}^+}_{P}-2E_{\infty})/2$. On the other hand, at large internuclear distances $R\rightarrow \infty$, the expansion of $E_0$, obtained from the asymptotic expressions for the energies of the ground and first excited states, \begin{equation} \label{erinf} \tilde E_{X^2 \Sigma_u^+ / A^2 \Sigma_g^+}^{(\infty)}\ =\ - \frac{C_4}{R^4}\ -\ \frac{C_6}{R^6}\ +\ \cdots \mp \frac{1}{2} \Delta E\ , \end{equation} has the form \begin{equation} \label{eq6} E_{0}\ =\ - \frac{C_4}{R^4}\ -\ \frac{C_6}{R^6}\ +\ \cdots\ , \end{equation} with~\cite{XPG:2005} \begin{eqnarray*} C_4 &\ =\ & 1.382874\ ,\\ C_6 &\ =\ & 3.193540\ . \end{eqnarray*} In order to interpolate between the two asymptotic limits~\re{eq5} and \re{eq6} we use the two-point Pad\'e approximation $$\frac{1}{R}\mbox{Pade}[N/N+3]_{\{3,3\}}\ ,$$ where the first three terms of the expansions at small and large distances should be reproduced exactly. Explicitly, \begin{equation} \label{eq7} E_{0 _{\{3,3\}}}\ =\ \frac{8+\sum_{i=1}^{5} a_iR^{i}-C_4R^6 }{R(1+\alpha_1\,R+\alpha_2\,R^2+\sum_{i=3}^{6} b_i\,R^{i}-\alpha_7\,R^7-\alpha_8\,R^8+R^9)}\ , \end{equation} where \begin{eqnarray} \alpha_1 & = & (a_1-C_0)/8\ ,\\ \alpha_2 & = & (8 a_2-a_1 C_0 + C_0^2)/64\ ,\nonumber \\ \alpha_7 & = & (C_6+a_4)/C_4\ ,\nonumber \\ \alpha_8 & = & a_5/C_4\ .\nonumber \end{eqnarray} These constraints guarantee that the first three terms in each of the expansions~\re{eq5} and \re{eq6} are reproduced exactly. The 9 free parameters are fixed by fitting to the numerical results of~\cite{XPG:2005}, \begin{eqnarray*} a_1=& 471.867\ , & b_3= 103.213\ ,\\ a_2=&-706.524\ , & b_4=-515.786\ ,\\ a_3=& 474.091\ , & b_5= 623.091\ ,\\ a_4=&-148.695\ , & b_6=-350.333\ ,\\ a_5=&21.8549\ . & \end{eqnarray*} A comparison between the fit~\re{eq7} and the numerical results~\cite{XPG:2005} is illustrated in Fig.~\ref{fe0ed}. \subsection{Potential Energy Curves} The explicit analytic expressions for the mean energy $E_0$~\re{eq7} and the energy gap $\Delta E$~\re{eq13} allow us to recover the potential energy curves for the ground $X^2 \Sigma_u^+$ and first excited $A^2 \Sigma_g^+$ states, \begin{equation} \label{eq14} E_{X^2 \Sigma_u^+ / A^2 \Sigma_g^+} = E_0 \mp \frac{1}{2}\Delta E \quad . \end{equation} In general, this approximation reproduces 3-4 s.d.
for the total energy of the ground state and first excited state in the whole domain of $R$ when compared with the results of~\cite{XPG:2005}, as shown in Table~\ref{tt1}, except for the domain $0.5 \leq R \leq 1.5$\,a.u. for the first excited state PEC $A^2 \Sigma_g^+$, where the deviation is significant. The minimum of the ground state $X^2 \Sigma_u^+$ electronic term, calculated by taking the derivative of~\re{eq14} and setting it equal to zero, gives $E_t = -0.181\,64$~Ry at $R_{eq}=2.041$~a.u. For comparison, the {\it ab initio} calculations of \cite{XPG:2005} give $E_t = -0.181\,76$~Ry at $R_{eq}=2.043$~a.u., while the most accurate numerical result leads to $E_t = -0.181\,84$~Ry at $R_{eq}=2.042$~a.u.~\cite{TPA:2012}. The crossing of the potential curve~\re{eq14} with the horizontal line $E=0$ occurs at $R_0=1.4083$\,a.u., in agreement with \cite{XPG:2005}, where $1.4 < R_0 < 1.5$\,a.u. The Van-der-Waals minimum for the state $A^2 \Sigma_g^+$ is located at $R=8.741$~a.u., being equal to $E_t = -0.000\,158$~Ry~\cite{XPG:2005}, while~\re{eq14} predicts $R = 8.362$~a.u. with $E_t=-0.000\,198$~Ry. At the same time our fit~\re{eq14} predicts the crossing point $R_0=7.1066$\,a.u., in agreement with \cite{XPG:2005}: $6.7 < R_0 < 7.4$\,a.u. Even though the simple analytic approximation~\re{eq14} predicts reasonably correctly the position and the depth of the minima for both states, a comparison with the results of \cite{AH:1991} reveals a significant deviation at $R <1.5$~a.u. for the excited state $A^2 \Sigma_g^+$, as can be seen in Table~\ref{tt1}, as well as a small deviation for the ground state $X^2 \Sigma_u^+$, see~\cite{TPA:2012}. The PEC for the state $A^2 \Sigma_g^+$ displays an irregularity at small $0.5 \lesssim R \lesssim 1.5$\,a.u. which can be attributed to a quasi-crossing with the next $\Sigma_g$ excited state. Interestingly, the patterns of irregularity presented in \cite{AH:1991} and in~\re{eq14} are qualitatively similar. Since the irregularity occurs at energies much above the threshold energy $E({\rm He}) + E({\rm He}^+)$, it should not bring much influence to the rovibrational spectra. Surprisingly, for the $X^2 \Sigma_u^+$ state the fit \re{eq14} also predicts a certain irregularity in the domain $0.9 \lesssim R \lesssim 1.5$\,a.u., as can be seen in Fig.~\ref{fpotC}: the numerical data from \cite{TPA:2012} deviate from our analytic curve, as well as from the one of \cite{AH:1991}, in this domain. In this domain our curve is based on the perturbative expansion of the energy at small $R$, which usually does not know about singularities related to the quasi-crossings, unlike the convergent expansion at large $R$. Hence, this deviation can be attributed to quasi-crossings situated far away from the real $R$ axis. Since it is relatively small and is situated far above the threshold energy, we do not expect much influence on the rovibrational spectra. Subsequent calculations confirm this prediction, see below. It can be checked that the asymptotic expansion of the fit \re{eq14} for the ground state $X^2 \Sigma_u^+$ at $R \rightarrow 0$ is \begin{equation} \label{eq15-1} E_{X^2 \Sigma_u^+ } \ =\ \frac{8}{R}\ -\ 18.842078\ +\ 0 \cdot R\ +\ \cdots\\ \end{equation} and at $R \rightarrow \infty$ \begin{equation} \label{eq15-2} E_{X^2 \Sigma_u^+ }\ =\ -\ \frac{1.382874}{R^4}\ -\ \frac{3.19354}{R^6}\ +\ \ldots \ - \ e^{-1.344\,R}\,R^{1/2}\left[3.304287\ +\ \frac{10.095520}{R}\ +\ \cdots\right] \ .
As for the excited state $A^2 \Sigma_g^+$, the asymptotic behavior of~\re{eq14} is \begin{eqnarray} \label{eq16} E_{A^2 \Sigma_g^+ } &=& \frac{8}{R}\ -\ 18.551218\ +\ 0\cdot R\ +\ \cdots\\ E_{A^2 \Sigma_g^+ } &=& \ -\frac{1.382874}{R^4}\ -\ \frac{3.193540}{R^6}\ +\ \cdots\ +\ e^{-1.344\,R}\,R^{1/2}\left[3.304287\ +\ \frac{10.095520}{R}\ +\ \cdots\right] \ .\nonumber \end{eqnarray} For both states these expansions are in agreement with the expected asymptotic behavior (cf.~\re{eq1} and~\re{erinf}). \begin{table} \caption{Energy of the ground $X ^2\Sigma_u^+$ and the excited state $A ^2\Sigma_g^+$ of the molecular ion He$_2^+$ obtained using approximation~\re{eq14}. The second and third lines display the results of~\cite{XPG:2005} and~\cite{TPA:2012}, respectively. For $R=1.0$\,a.u. the second line result is from~\cite{AH:1991}.} \begin{center} \resizebox{6.0cm}{!}{ \begin{tabular}{lll | cll} \hline\hline $R$ & $X ^2\Sigma_u^+$ & $A ^2\Sigma_g^+$&$R$&$X ^2\Sigma_u^+$&$A ^2\Sigma_g^+$\\ \hline 1.0 \hspace{0.2cm} & 0.78489 & 2.72578 & \hspace{0.1cm} 2.65 \hspace{0.2cm} & -0.13865 & 0.21102 \\ & 0.66628 & 1.59046 & & -0.13846 & 0.210960 \\ & 0.66537 & & & & \\ 1.1 & 0.44602 & &2.9 & -0.11365 & 0.14549 \\ & 0.42043 & & & -0.11358 & 0.145398 \\ & 0.42014 & & & -0.11360 & \\ 1.2 & 0.23859 & &3.5 & -0.06425 & 0.06189 \\ & 0.23910 & & & -0.06454 & 0.061922 \\ & 0.23884 & & & -0.06456 & \\ 1.3 & 0.10222 & &4.1 & -0.03416 & 0.02682 \\ & 0.10544 & & & -0.0342 & 0.027232 \\ & 0.10521 & & & -0.03420 & \\ 1.4 & 0.00662 & &5.3 & -0.00927 & 0.00462 \\ & 0.00787 & & & -0.00892 & 0.003852 \\ & 0.00766 & & & -0.00891 & \\ 1.5 & -0.06220 & 1.36915 &6.3 & -0.00319 & 0.00077 \\ & -0.06218 & 1.36908 & & -0.00296 & 0.00072 \\ & -0.06236 & & & -0.00296 & \\ 1.7 & -0.14419 & 0.97461 &6.9 & -0.00174 & 0.00011 \\ & -0.14416 & 0.974356 & & -0.0016 & 0.000210 \\ & -0.14431 & & & -0.00159 & \\ 1.8 & -0.16507 & 0.82239 &9.3 & -0.00025 &-0.00017 \\ & -0.16496 & 0.822274 & & -0.00024 &-0.000148 \\ & -0.16508 & & & -0.00023 & \\ 1.9 & -0.17662 & 0.69514 &9.6 & -0.00021 &-0.00015 \\ & -0.17656 & 0.695202 & & -0.000198 &-0.00014 \\ & -0.17668 & & & -0.000198 & \\ 2.0 & -0.18126 & 0.58884 &10.0 & -0.00017 &-0.00014 \\ & -0.18132 & 0.588948 & & -0.00016 &-0.000126 \\ & -0.18143 & & & -0.000160 & \\ 2.1 & -0.18092 & 0.49991 &10.5 & -0.00013 &-0.00012 \\ & -0.18106 & 0.499976 & & -0.000126 &-0.000108 \\ & -0.18115 & & & -0.000126 & \\ 2.2 & -0.17703 & 0.42534 &11.0 & -0.00011 &-0.000098 \\ & -0.17716 & 0.425350 & & -0.000102 &-0.000092 \\ & -0.17723 & & & -0.0001015 & \\ 2.4 & -0.16260 & 0.30989 &12.0 & -0.00007 &-0.00007 \\ & -0.16254 & 0.309830 & & -0.00007 &-0.000066 \\ & -0.16260 & & & -0.000069 & \\ \hline\hline \end{tabular}} \end{center} \label{tt1} \end{table} \begin{figure}[h!] \begin{center} \includegraphics[scale=1.5]{fpotC.eps} \caption{Potential energy curves obtained from~\re{eq14} (marked by red and dark blue) compared with numerical results (marked by dots) from \cite{XPG:2005} (red and blue) and \cite{TPA:2012} (empty). The curves indicated as $\Sigma_{g,u}$ (light blue and yellow) are taken from~\cite{AH:1991}.} \label{fpotC} \end{center} \end{figure}
\subsection{Rovibrational States} In the Born-Oppenheimer approximation the rovibrational states are calculated by solving the reduced one-dimensional Schr\"odinger equation for the nuclear motion \begin{equation} \label{eq17} \left[-\frac{1}{\mu}\frac{d^2}{dR^2}\ +\ \frac{L(L+1)}{\mu R^2}\ +\ V(R)\right]\phi(R)\ =\ E_{\nu L} \phi(R)\ , \end{equation} where $\mu = M_n/2=3647.149771$ is the reduced mass of the two $\alpha$ particles (in units of the electron mass), and $\nu$ and $L$ are the vibrational and rotational quantum numbers, respectively; each state is labelled as $(\nu,L)$. Usually, equation \re{eq17} is solved numerically with the potential $V(R)$ also defined numerically at some discrete sequence of points in $R$. In our case the potential $V(R)$ is given by the analytic expressions for the potential energy curves~\re{eq14} (together with the expressions \re{eq13} and \re{eq7}). In this case the Lagrange-mesh method~\cite{DB:2015} can be used in its full generality. As for the results, it can be immediately seen that the PEC of the ground state $X^2 \Sigma_u^+$ \re{eq14} supports 24 vibrational states $(\nu,0)$ and 59 pure rotational states $(0,L)$; hence, $L_{max} = 58$. In total, we found 825 rovibrational states $(\nu,L)$, five fewer than the 830 states presented in~\cite{TPA:2012}. It is worth mentioning that the highest, $\nu = 23$, vibrational levels at $L=0,1,2,3$ were obtained in \cite{TPA:2012} only when the non-adiabatic correction is included in the PEC. In our case \re{eq14} the level $(23,0)$ is found without taking the non-adiabatic correction into account. All found states are presented in a histogram in Fig.~\ref{hepH}. A careful comparison of our results with those in~\cite{TPA:2012} shows that they agree to 3-4 s.d. \begin{figure}[h!] \begin{center} \includegraphics[scale=1.5]{rovib1sH.eps} \caption{He${}_2^+$ dimer: rovibrational states supported by the ground state $X^2\Sigma_u^+$ as a function of the angular momentum $L$, $L_{max} = 58$. In total, there are 825 rovibrational states. The 5 extra states reported in~\cite{TPA:2012} are indicated in red; for some of them non-adiabatic corrections are included.} \label{hepH} \end{center} \end{figure} Applying the same procedure to the PEC of the first excited state $A ^2\Sigma_g^+$, our results indicate the presence of 9 rovibrational states, the same number of states as found in~\cite{XPG:2005} (see Table~\ref{rovibEx}). The energies are very small, being of the order of $10^{-5}$\,Ry. Even though all our results are stable within the Lagrange-mesh method, they lie beyond our precision. It is worth mentioning that we predict the rotational state $(0,5)$, which is not found in \cite{XPG:2005}, while \cite{XPG:2005} predicts the vibrational state $(2,0)$, which is not seen in our calculations.
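The counting of bound states can be reproduced with any standard discretization of \re{eq17}. The Python sketch below is \emph{not} the Lagrange-mesh method used here, but a minimal finite-difference illustration with a model Morse potential whose depth and minimum position are taken from the $X^2\Sigma_u^+$ values quoted above; the width parameter $a_0=1$\,a.u.$^{-1}$ is an arbitrary illustrative choice, so only the order of magnitude of the level count is meaningful.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh_tridiagonal

mu = 3647.149771   # reduced mass of two alpha particles (electron masses)

def bound_levels(V, L=0, Rmin=0.8, Rmax=60.0, npts=6000):
    # Discretize -(1/mu) d^2/dR^2 + L(L+1)/(mu R^2) + V(R)  (Rydberg units)
    R = np.linspace(Rmin, Rmax, npts)
    h = R[1] - R[0]
    diag = 2.0/(mu*h*h) + L*(L + 1)/(mu*R*R) + V(R)
    off = -np.ones(npts - 1)/(mu*h*h)
    return eigh_tridiagonal(diag, off, eigvals_only=True,
                            select='v', select_range=(-1.0, 0.0))

# Illustrative Morse potential: depth and minimum from the X state,
# width a0 = 1 a.u.^-1 is a made-up parameter
D, Req, a0 = 0.18164, 2.041, 1.0
V = lambda R: D*(np.exp(-2*a0*(R - Req)) - 2*np.exp(-a0*(R - Req)))
print(len(bound_levels(V, L=0)))   # number of vibrational states (nu,0)
\end{verbatim}
With these made-up parameters the sketch yields a couple of dozen vibrational levels at $L=0$, of the same order as the 24 states supported by \re{eq14}; a quantitative count, of course, requires the actual PEC.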
\begin{table} \caption{Molecular ion He$_2^+$: Rovibrational energies $E_{(\nu,L)}$ (in units of $10^{-5}$~Ry) for the state $A ^2\Sigma_g^+$ in approximation~\re{eq14}. The second line displays results from~\cite{XPG:2005}.} \begin{center} \begin{tabular}{c|lll} \hline \hline \ $L$ \ &\ $\nu=0$ \ &\ $\nu=1$ \ &\ $\nu=2$ \\ \hline 0\ &\ -9.74 \ &\ -1.17 \ &\ \\ &\ -7.3762\ &\ -0.7194 \ &\ -0.0003 \\ 1\ &\ -9.12 \ &\ -0.90 \ &\ \\ &\ -6.8212\ &\ -0.5074 \ &\ \\ 2\ &\ -7.90 \ &\ -0.42 \ &\ \\ &\ -5.7289 \ &\ -0.1357 \ &\ \\ 3\ &\ -6.11 \ &\ &\ \\ &\ -4.1381 \ &\ &\ \\ 4\ &\ -3.81 \ &\ &\ \\ &\ -2.1200 \ &\ &\ \\ 5\ &\ -1.11 \ &\ &\ \\ \hline\hline \end{tabular} \end{center} \label{rovibEx} \end{table} \subsection{He${}_2^+$: Conclusions} Within the Born-Oppenheimer approximation, analytic expressions for the potential energy curves in the whole range of internuclear distances $R$ are constructed by using two-point Pad\'e approximants for both the ground $X^2\Sigma_u^+$ and the first excited $A ^2\Sigma_g^+$ states of the diatomic molecular ion He$_2^+$. In general, the obtained analytic curves reproduce the known numerical results with an accuracy of 3-4 s.d. in the whole domain of $R$. For small internuclear distances $0.5 < R < 1.5$~a.u., possibly due to the quasi-crossing (situated in complex $R$ sufficiently close to the real $R$ axis) of the excited state $A ^2\Sigma_g^+$ with the next $\Sigma_g$ excited state, the potential energy curve becomes inaccurate in this domain. This leads to a certain loss of accuracy in the spectra of the rovibrational states situated in the Van-der-Waals minimum, but it does not change the number of rovibrational states, which is equal to nine. All these states are very weakly bound. In the case of the ground state $X^2\Sigma_u^+$, the potential curve predicted by the analytic approximation \re{eq14} (with the expressions \re{eq13} and \re{eq7} as ingredients) differs in the same domain $0.9 < R < 1.5$~a.u. from the numerical results, but insignificantly \cite{TPA:2012}. This indicates the existence of quasi-crossings in the complex $R$ plane situated far away from the real axis. This deviation does not significantly change the description of the spectra of rovibrational states, which are obtained with an accuracy of 3-4 s.d. Note that the minima predicted by the approximation \re{eq14} differ from the numerical results by $\sim 0.1\%$ and $\sim 25\%$ for the ground $X^2\Sigma_u^+$ and the first excited $A^2\Sigma_g^+$ states, respectively. The obtained analytic expressions for the PEC allow us to solve the differential equation for the nuclear motion using the Lagrange-mesh method with an accuracy of 3-4 s.d. The ground state curve $X^2\Sigma_u^+$ supports 825 rotational and vibrational states, only 5 fewer than the 830 reported in the literature; all missing states are weakly bound, and for some of them non-adiabatic corrections were taken into account. For the excited state curve $A ^2\Sigma_g^+$ the predicted rotational and vibrational states lie beyond the accuracy of the BO approximation, and various corrections should be taken into account. Thanks to the analytic knowledge of the PEC \re{eq14}, the calculated rovibrational states $(\nu,L)$ allow us to explore radiative transitions between those states, as was done in~\cite{OB:2012,OT:2016}. To our knowledge, radiative transitions for the molecular ion He$_2^+$ have not been considered before. This will be done elsewhere. \section{Molecular ion L\lowercase{i}H} As an example of the application of our approach to heteronuclear diatomic molecules, let us consider the ground state $X^1 \Sigma^+$ of the LiH molecule.
At small internuclear distances $R\rightarrow 0$ the dissociation energy $\tilde{E}$ is given by \begin{equation} \label{lihR0} \tilde E_{X^1 \Sigma^+} = \frac{2Z_1Z_2}{R} +\epsilon_0+ 0\cdot R + O(R^2) \ , \end{equation} where $\epsilon_0= E^{\rm Be} + |E_{\infty}|$. The energy of the united atom is $E^{{\rm Be}} =-29.33474$~Ry~\cite{MAA:1991} and \begin{displaymath} E_{\infty}= E_{\rm H} + E_{{\rm Li}} =-(1.000\,000\,+\, 14.956\,120)\, {\rm Ry}=-15.956\,120\,{\rm Ry} \end{displaymath} is the so-called asymptotic energy~\cite{PP:2006}. For large internuclear distances $R\rightarrow \infty$ the energy is given by~\cite{JMCB:2015} \begin{equation} \label{lihRl} \tilde E_{X^1 \Sigma^+} = -\frac{c_6}{R^6} -\frac{c_8}{R^8}+\cdots, \end{equation} where $c_6 = 133.182$ is the Van-der-Waals constant. Let us take the two-point Pad\'e-type approximation \begin{equation*} \frac{1}{R}\, \mbox{Pade}[N/N+5](R)\ , \end{equation*} see (\ref{PadeN}), choose $N=4$, \begin{equation} \label{LiHV} \tilde E_{X^1 \Sigma^+\, _{\{3,2\}}} = \frac{6+a_1 R+ a_2 R^2+a_3 R^3-c_6 R^4} {R(1+\alpha_1 R+\alpha_2 R^2+\sum_{i=3}^{7}b_i R^i -\alpha_3 R^8+R^9)}\ , \end{equation} and impose three constraints \begin{eqnarray} \label{paramLiHV} \alpha_1&=&(a_1-\epsilon_0)/6\ ,\\ \alpha_2&=&(6 a_2-a_1 \epsilon_0+\epsilon_0^2)/36\ ,\nonumber\\ \alpha_3&=&a_3/c_6\ ,\nonumber \end{eqnarray} which guarantee that the expansion of $\tilde E_{X^1 \Sigma^+\, _{\{3,2\}}}$~\re{LiHV} reproduces exactly the first three terms, $R^{-1}$, $R^0$ and $R^1$, at small internuclear distances in~\re{lihR0}, and two terms, $R^{-6}$ and $R^{-7}$, at large internuclear distances in~\re{lihRl}. Fitting the data of~\cite{TPA:2011} with~\re{LiHV}, we find the eight free parameters \begin{eqnarray*} a_1 = \phantom{-} 44744.6\ , && b_4 = \phantom{-} 2179.81\ ,\\ a_2 = -28086.2\ , && b_5 = \phantom{-} 727.108\ ,\\ a_3 = \phantom{-} 2614.80\ , && b_6 = -592.941\ ,\\ b_3 = -6739.29\ , && b_7 = \phantom{-} 158.198\ . \end{eqnarray*} The comparison of the fit~\re{LiHV} with the numerical results~\cite{TPA:2011} is presented in Table~\ref{ttELiH} and illustrated in Fig.~\ref{potLiH}. Note that the fit reproduces with high accuracy the equilibrium distance $R_e=3.015$\,a.u. (see, e.g., \cite{TPA:2011}) and its vicinity. The fit predicts that the potential curve vanishes, $E(R_0)=0$, at $R_0=1.8954$\,a.u., while for $R > R_0$ the PEC becomes negative.
\begin{table} \caption{LiH molecule: PEC for the ground state $X^1 \Sigma^+$ obtained using approximation~\re{LiHV}. The third column displays the results of~\cite{TPA:2011} (rounded).} \begin{center} \resizebox{3.2cm}{!}{ \begin{tabular}{lrr} \hline\hline $R$ &\ fit~\re{LiHV}\ &\ \cite{TPA:2011} \\ \hline 1.8 & 0.05099\ &\ 0.05100\\ 1.9 & -0.00221\ &\ -0.00221\\ 2.0 & -0.04539\ &\ -0.04540\\ 2.2 & -0.10809\ &\ -0.10810\\ 2.4 & -0.14732\ &\ -0.14730\\ 2.6 & -0.17013\ &\ -0.17011\\ 2.8 & -0.18151\ &\ -0.18151\\ 3.0 & -0.18495\ &\ -0.18496\\ 3.015 & -0.18497\ &\ -0.18497\\ 3.1 & -0.18450 & -0.18451\\ 3.2 & -0.18293 & -0.18294\\ 3.4 & -0.17719 & -0.17719\\ 3.6 & -0.16897 & -0.16896\\ 3.8 & -0.15916 & -0.15915\\ 4.0 & -0.14840 & -0.14838\\ 4.2 & -0.13714 & -0.13713\\ 4.4 & -0.12573 & -0.12572\\ 4.6 & -0.11440 & -0.11441\\ 4.8 & -0.10335 & -0.10335\\ 5.0 & -0.09269 & -0.09269\\ 5.5 & -0.06832 & -0.06831\\ 6.0 & -0.04784 & -0.04783\\ 6.5 & -0.03172 & -0.03171\\ 7.0 & -0.01997 & -0.01998\\ 7.5 & -0.01209 & -0.01211\\ 8.0 & -0.00716 & -0.00719\\ 8.5 & -0.00422 & -0.00424\\ 9.0 & -0.00251 & -0.00250\\ 9.5 & -0.00152 & -0.00148\\ 10.0 & -0.00094 & -0.00089\\ 11.0 & -0.00039 & -0.00033\\ 12.0 & -0.00018 & -0.00013\\ 13.0 & -0.00009 & -0.00006\\ 14.0 & -0.00005 & -0.00003\\ \hline\hline \end{tabular}} \end{center} \label{ttELiH} \end{table} \begin{figure}[h!] \begin{center} \includegraphics[width=10cm]{fpotLiH.eps} \caption{PEC for the diatomic molecule LiH from~\re{LiHV} (solid line). Points represent data from~\cite{TPA:2011}.} \label{potLiH} \end{center} \end{figure} \subsection{Rotational and vibrational states} By solving the Schr\"odinger equation for the nuclear motion~\re{eq17} with the analytic potential~\re{LiHV}, the rovibrational spectra $E_{(\nu,L)}$ can be obtained. The reduced mass of the molecule is calculated using $m_{{}^7 {\rm Li}} = 12786.393$ and $m_{\rm H} = 1836.15267221$ (in units of the electron mass). Following the calculations carried out within the Lagrange-mesh method, the ground state $X ^1\Sigma^+$ of the dimer ${}^7$LiH supports 24 vibrational states $E_{(\nu,0)}$. Comparing the spectrum of vibrational states $E_{(\nu,0)}$ with the one presented in~\cite{TPA:2011}, one can see that no fewer than 4 figures are reproduced for $\nu \le 7$, as can be seen in Table~\ref{tvib}. However, for $\nu > 7$ the accuracy is reduced to 3 figures, and for some values of $\nu$ even to 2 figures. We have to note that the state $(23,0)$ is not reported in~\cite{TPA:2011}. According to our calculations this should be a weakly-bound state; hence, it might be an artifact of the loss of accuracy in fitting the PEC using (\ref{LiHV}) with parameters (\ref{paramLiHV}), or of the numerical solution of the Schr\"odinger equation in~\cite{TPA:2011}. \begin{table} \caption{Molecule LiH: Vibrational states $E_{(v,L=0)}$ in the ground state $X ^1\Sigma^+$. For the sake of convenience, the 3rd column displays the energies in cm$^{-1}$ (1 Hartree = 219\,474.631\,363 cm$^{-1}$). The 4th column displays the results from~\cite{TPA:2011}.} \begin{center} \resizebox{6.0cm}{!}{ \begin{tabular}{rrrr} \hline\hline $v$ & $E_{(v,0)}$[Ry] &\ \ $E_{(v,0)}$[cm$^{-1}$]& \cite{TPA:2011} \\ \hline 0 & -0.17861\ & -19600. &-19600.57\\ 1 & -0.16621\ & -18240. &-18240.34\\ 2 & -0.15423& -16925. &-16924.98\\ 3 & -0.14264& -15653. &-15653.58\\ 4 & -0.13145& -14425. &-14425.30\\ 5 & -0.12064& -13238. &-13239.37\\ 6 & -0.11021& -12094. &-12095.19\\ 7 & -0.10015& -10990. &-10992.19\\
8 & -0.09047& -9927.7 & -9930.00\\ 9 & -0.08116& -8905.9 & -8908.41\\ 10 & -0.07222& -7924.9 & -7927.45\\ 11 & -0.06365& -6984.9 & -6987.43\\ 12 & -0.05546& -6086.4 & -6088.98\\ 13 & -0.04767& -5230.6 & -5233.15\\ 14 & -0.04027& -4419.1 & -4421.65\\ 15 & -0.03330& -3654.3 & -3656.91\\ 16 & -0.02679& -2939.7 & -2942.32\\ 17 & -0.02078& -2279.9 & -2282.68\\ 18 & -0.01532& -1681.4 & -1684.62\\ 19 & -0.01050& -1152.7 & -1156.46\\ 20 & -0.00643& -705.37 & -708.64\\ 21 & -0.00323& -354.58 & -357.53\\ 22 & -0.00108& -118.73 & -118.21\\ 23 & -0.00011& -11.65 & -- \\ \hline\hline \end{tabular}} \end{center} \label{tvib} \end{table} In total, the ground state $X^1\Sigma^+$ of the diatomic molecule ${}^7$LiH supports 906 rovibrational states $E_{(\nu,L)}$. These are presented as a histogram in Fig.~\ref{LiHhyst}, with $L_{max}=61$. In~\cite{SSW:2012} a total of 901 rovibrational states is reported. The five extra states $E_{(23,0)}$, $E_{(23,1)}$, $E_{(23,2)}$, $E_{(23,3)}$ and $E_{(21,13)}$, which we found, are indicated in red in Fig.~\ref{LiHhyst}; they are weakly bound and might be a result of the loss of accuracy. \begin{figure}[h!] \begin{center} \includegraphics[width=10cm]{rovibLiH.eps} \caption{Diatomic molecule LiH: 906 rovibrational bound states in the ground state $X ^1\Sigma^+$ as a function of the angular momentum $L$. Weakly bound states are indicated in red.} \label{LiHhyst} \end{center} \end{figure} \section{Conclusions} In studying the electronic Hamiltonian of a diatomic molecule, the domains of small and large internuclear distances can be explored in a sufficiently easy manner, without performing massive numerical calculations. A straightforward interpolation between these two domains by means of (generalized) two-point Pad\'e approximations, involving a few points in $R$ of order 1\,a.u. around the minimum of the PEC found via {\it ab initio} calculations, leads to amazingly accurate analytic formulas for the potential curves. In this formalism hetero-nuclear $(A+B)$ and homo-nuclear $(A+A)$ dimers are conceptually different: the latter contain, in addition to the multipole expansion, exponentially small terms at large distances due to tunneling between two identical Coulomb wells. This results in the general formula for approximating the potential curve, \begin{equation} \label{general} V(R)\ =\ \mbox{Pade}(n/m) (R)\ +\ \delta_{Z_A,Z_B} D_0(R)\ e^{-S_0(R)}\mbox{Pade}(p/q) (R)\ , \end{equation} where $n,m,p,q$ are integers, see below, and $D_0$, $S_0$, as well as the numbers $(p,q)$, depend on the dimer under investigation. This excludes the existence of universal formulas for the potential curves of dimers valid in the {\it entire} domain of $R$, in agreement with the conclusions drawn in the book by Goodisman \cite{Goodis:1961}, while the opposite was proclaimed in \cite{XG:2005}. However, if we consider hetero-nuclear dimers, the second term in (\ref{general}) disappears and we arrive at (\ref{Pade}), \begin{equation*} \label{hetero-ion} V_{ion}(R)\ =\ \frac{Z_{A} Z_{B}}{R}\ \mbox{Pade}(N / N+3)\ , \end{equation*} for a charged dimer, and at (\ref{PadeN}), \begin{equation*} \label{hetero-neutro} V_{neutral}(R)\ =\ \frac{Z_{A} Z_{B}}{R}\ \mbox{Pade}(N / N+5)\ , \end{equation*} for a neutral one. These formulas look universal; they can be used for the study of any hetero-nuclear dimer. The approach was illustrated by studying the PECs of the He${}_2^+$ and ${}^7$LiH dimers. The rovibrational spectra of both dimers are described with sufficiently high accuracy. Radiative transitions will be studied elsewhere.
Approximate analytic expressions for the PEC allow one to simplify studies of the contribution of the potential (electronic) curves, which describe two-atom interactions (in other words, two-body interactions), to polyatomic potential surfaces. This will be studied elsewhere. There also appears an interesting possibility to construct approximate eigenfunctions of the nuclear Hamiltonian, in which the PEC plays the role of the potential. \section{Acknowledgements} The research is partially supported by CONACyT grant A1-S-17364 and DGAPA grant IN113819 (Mexico). H.O.P. is grateful to the Instituto de Ciencias Nucleares, UNAM, Mexico for its kind hospitality, where the present study was initiated and concluded.
{ "timestamp": "2020-10-02T02:04:03", "yymm": "1904", "arxiv_id": "1904.06614", "language": "en", "url": "https://arxiv.org/abs/1904.06614" }
\section{Introduction} Near-infrared (NIR) spectroscopy focuses on the interaction of near-infrared radiation with matter and is an important analytical technique for the detection and recognition of chemical substances based on the vibrational modes of their molecular constituents, used in pharmaceutical analysis, food quality determination and non-destructive analysis of biological materials, to name a few \cite{Cen2007TheoryQuality,Manley2014Near-infraredMaterials,Jamrogiewicz2012ApplicationTechnology}. However, molecular overtone bands lying in the NIR spectral region are forbidden in the harmonic oscillator approximation \cite{Katiyi2018SiNear-Infrared,karabchevsky2018tuning}. Such bands arise only from the anharmonicity of molecular vibrations, which is rather weak \cite{Katiyi2018SiNear-Infrared}, leading to overtone bands with an absorption cross-section an order of magnitude smaller than that of the fundamental modes of the same degree of freedom. Here we explore for the first time the mechanism of local field enhancement of molecular overtones. The local field enhancement can be realized with plasmonic materials by means of collective oscillations of free electrons, in the form of an extended surface plasmon-polariton (SPP) in thin metal films \cite{Maier2007Plasmonics:2007.,Klimov2014Nanoplasmonics,karabchevsky2009theoretical, karabchevsky2011fast,karabchevsky2011nanoprecision,karabchevsky2015transmittance} or of a localized surface plasmon resonance (LSPR) in plasmonic nanoantennas \cite{Karabchevsky2016TuningNanoparticles,maslovski2018purcell,simovski2015circuit,galutin2017invisibility}. The enhancement and localization of electromagnetic fields in the close proximity of nanoantennas depend on their material, size, shape and the surrounding media \cite{Karabchevsky2016TuningNanoparticles,Dadadzhanov2018VibrationalNanoantennas}. While exploring the influence of the extended surface plasmon on the absorption by molecular overtones, we showed that a 100-fold enhancement can be achieved \cite{Karabchevsky2016StrongPolariton} in a guided wave configuration. This enhancement was observed when the absorption band of the molecular vibration N-H was detuned from the plasmonic resonance. In Ref. \cite{Shih2016SimultaneousDisks}, the authors explored the overtone absorption effect with porous gold nanodiscs and ascribed the achieved enhancement to the molecules that occupy hot-spots in the structures. Despite this experimental observation of surface enhanced near-infrared absorption (SENIRA) of molecular overtones with plasmonic nanoantennas, the effect has not been explored theoretically. In this work we theoretically explore the as-yet-unclear possibility to enhance the absorption by molecular overtone transitions in the near-field of plasmonic nanoantennas, such as gold nanorods (GNRs), due to the combination of a localized plasmon resonance and the lightning rod effect \cite{Li2015plasmon}. \section{Theoretical model} Fig. \ref{fig:Fig1} shows the system we study. A weakly absorbing medium, described by the complex permittivity of the N-Methylaniline molecule, encapsulates a gold nanorod or nanoellipsoid in a homogeneous shell-like manner. The incident beam is directed perpendicular to the gold nanoparticles, as indicated by the wave vector \textit{\textbf{k}}, and polarized along the gold nanoparticles. \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{figure1.pdf} \caption{Schematics of the systems with gold nanorods (left) studied in numerical simulations and nanoellipsoids (right) used in the analytical model. The shells are made of N-Methylaniline (NMA).
$L$ and $R$ are the semi-major and semi-minor axes of the gold nanoparticles, respectively, while $t$ is the thickness of the molecular shells. The incident wave is polarized along the rod.} \label{fig:Fig1} \end{figure} We study the contribution of the GNR parameters to the effect of SENIRA by molecular overtones. For this we built an analytical model of a confocal ellipsoidal core-shell nanoparticle. In the framework of the quasi-static approximation, we express the absorption, scattering, and extinction cross-sections through the particle polarizability \cite{Bohren2008AbsorptionParticles,Kelly2003TheEnvironment}: \begin{equation} \alpha=\frac{v\left\{\left(\varepsilon_{2}-\varepsilon_{m}\right)\left[\varepsilon_{2}+\left(\varepsilon_{1}-\varepsilon_{2}\right)\left(S^{(1)}-f S^{(2)}\right)\right]+f \varepsilon_{2}\left(\varepsilon_{1}-\varepsilon_{2}\right)\right\}}{\left[\varepsilon_{2}+\left(\varepsilon_{1}-\varepsilon_{2}\right)\left(S^{(1)}-f S^{(2)}\right)\right]\left[\varepsilon_{m}+\left(\varepsilon_{2}-\varepsilon_{m}\right) S^{(2)}\right]+f S^{(2)} \varepsilon_{2}\left(\varepsilon_{1}-\varepsilon_{2}\right)} \end{equation} where $S^{(i)}$ ($i$=1,2) are the geometrical factors of the core and the shell in the direction of the polarization (Fig.\ref{fig:Fig1}); \textit{$\epsilon_1$}, \textit{$\epsilon_{2}$}, \textit{$\epsilon_m$} are the frequency-dependent dielectric permittivity functions of the gold core, the molecular shell and the surrounding medium, respectively; \textit{v} is the full volume of the nanoparticle with the shell and \textit{f} is the ratio of the inner core volume to $v$. As the input parameters we consider the core and shell semi-axes and the frequency-dependent dielectric permittivities of the metal core, the shell and the surrounding medium, which was considered to be air. N-Methylaniline (NMA) was chosen as a representative probe-molecule example of an organic molecule that possesses overtone bands in the NIR spectral range \cite{Katiyi2017FigureSpectroscopy,Karabchevsky2016GiantChip,karabchevsky2018tuning}. Its absorption bands at the wavelengths of 1494 nm and 1676 nm are associated with the first overtones of the N-H and C-H stretching modes. These bands are accompanied by anomalous dispersion regions, as follows from the Kramers-Kronig relations, and are presented in Figure 7d of Ref. \cite{Katiyi2017FigureSpectroscopy}.
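A minimal Python sketch of this quasi-static model is given below; it is an illustration, not the code used for the figures. The geometrical factors are those of a prolate spheroid polarized along its long axis, and the cross-sections follow the standard quasi-static relations $\sigma_{abs}=k\,\mathrm{Im}\,\alpha$ and $\sigma_{sca}=k^4|\alpha|^2/6\pi$. The permittivity values in the usage line are rough placeholders: a realistic calculation requires tabulated gold data and the dispersive NMA permittivity, and the shell below is simply offset rather than strictly confocal.
\begin{verbatim}
import numpy as np

def S_prolate(a, bb):
    # Geometrical factor of a prolate spheroid (a > bb = c), long axis
    e = np.sqrt(1.0 - (bb/a)**2)
    return (1.0 - e**2)/e**2*(np.log((1.0 + e)/(1.0 - e))/(2.0*e) - 1.0)

def alpha_coated(eps1, eps2, eps_m, a1, b1, a2, b2):
    # Quasi-static polarizability of a core-shell spheroid, Eq. (1)
    S1, S2 = S_prolate(a1, b1), S_prolate(a2, b2)
    v = 4.0*np.pi/3.0*a2*b2**2          # full (core + shell) volume
    f = (a1*b1**2)/(a2*b2**2)           # core volume fraction
    A = eps2 + (eps1 - eps2)*(S1 - f*S2)
    num = v*((eps2 - eps_m)*A + f*eps2*(eps1 - eps2))
    den = A*(eps_m + (eps2 - eps_m)*S2) + f*S2*eps2*(eps1 - eps2)
    return num/den

def cross_sections(alpha, lam, n_m=1.0):
    k = 2.0*np.pi*n_m/lam
    return k*np.imag(alpha), k**4/(6.0*np.pi)*np.abs(alpha)**2

# Placeholder permittivities near 1494 nm (gold, NMA-like shell, air)
sig_abs, sig_sca = cross_sections(
    alpha_coated(-115 + 11j, 2.25 + 0.02j, 1.0,
                 a1=55.9e-9, b1=5e-9, a2=75.9e-9, b2=25e-9),
    lam=1494e-9)
\end{verbatim}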
\section{Results and discussion} First, we analyzed how the LSPR position depends on the analyte shell thickness, $t$. Since the enhanced near-field rapidly decays with the distance from the surface, effective interaction is possible here only at distances comparable to the nanoantenna dimensions. In addition, the aspect ratio of the nanoantenna should provide the resonant interaction between the longitudinal plasmon and an overtone excitation. Therefore, we fix the semi-minor axis of the gold nanoellipsoid at 5 nm, while varying the semi-major axis until the LSPR band overlaps with an overtone mode. For this, we calculated the extinction cross-sections of gold nanoellipsoids covered by thin shells of NMA in the form of confocal ellipsoids. Fig. \ref{fig:Fig2}a shows the extinction cross-section of the gold nanoellipsoid as a function of the NMA shell thickness. The semi-major axis of the gold core is $L$ = 55.9 nm, which leads to exact resonance with the first overtone of the N-H mode when the shell thickness is $t$ = 20 nm. Fig. \ref{fig:Fig2}b shows the same dependence when the semi-major axis of the gold core is $L$ = 68.1 nm, which leads to exact resonance with the first overtone of the C-H mode when the shell thickness is $t$ = 20 nm. The long-wavelength shift of the plasmon bands as a function of the shell thickness $t$ is rather strong for $t < 40$ nm but saturates for $t > 40$ nm. \begin{figure}[h] \centering \includegraphics[width=0.45\linewidth]{figure2a.png} \includegraphics[width=0.45\linewidth]{figure2b.png} \caption{Extinction cross-sections of gold nanoellipsoids with NMA shells of different thicknesses. \textbf{(a)} The semi-major axis of the gold core is $L$ = 55.9 nm; \textbf{(b)} the semi-major axis of the gold core is $L$ = 68.1 nm. The semi-minor axis is $R$ = 5 nm in both cases.} \label{fig:Fig2} \end{figure} As a proof-of-concept numerical simulation, we built a numerical model in the COMSOL Multiphysics 5.4 software and show the tuning of the plasmon bands of the GNR to the NMA overtone bands. Fig. \ref{fig:Fig3} shows the calculated extinction (ECS), absorption (ACS) and scattering (SCS) cross-sections of gold nanorods with an NMA shell for $L = 49.9$ nm (Fig. \ref{fig:Fig3}a) and for $L = 60.6$ nm (Fig. \ref{fig:Fig3}b). The nanorod diameter is 10 nm. We choose the lengths of the GNRs such that their plasmon bands overlap with the overtone bands of N-H located at 1494 nm and of C-H located at 1676 nm. Considering the results presented in Fig. \ref{fig:Fig3}, one concludes that the extinction is governed by absorption, while the scattering contribution is negligible. \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{figure3.png} \caption{Extinction (\textit{brown}), absorption (\textit{red}) and scattering (\textit{pink}) cross-sections of gold nanorods with an NMA shell. The nanorod diameter is set to 10 nm for \textbf{(a)} $L$ = 49.9 nm, \textbf{(b)} $L$ = 60.6 nm. The thickness of the NMA molecular shell is homogeneous and equals $t$ = 20 nm. The extinction coefficient of NMA (\textit{blue}) is also shown for comparison.} \label{fig:Fig3} \end{figure} The advantage of using GNRs becomes evident when the concept of differential extinction is employed \cite{Karabchevsky2016StrongPolariton}. Experimentally, the differential absorption can be realized by comparing the extinction cross-section of a GNR surrounded by the analyte shell with that of a GNR surrounded by a shell of non-absorbing material that mimics only the mean value of the analyte's refractive index. Thus, the difference between the cross-sections for absorbing and non-absorbing materials represents the influence of the analyte absorption and anomalous dispersion on the LSPR intensity and spectral position. On the other hand, it also includes the influence of the LSPR on the analyte absorption. Quantitatively, we define the \textit{differential extinction}, DE, as \cite{Dadadzhanov2018VibrationalNanoantennas,Karabchevsky2016StrongPolariton}: \begin{equation} DE=\sigma_{ext}^{NR/NMA}-\sigma_{ext}^{NR/NMA^{*}} \end{equation} where the first term $\sigma_{ext}^{NR/NMA}$ represents the extinction cross-section of the GNR with the NMA shell, while the second term $\sigma_{ext}^{NR/NMA^{*}}$ represents the same value with NMA replaced by a dummy medium of constant dielectric permittivity.
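Continuing the sketch above, the differential extinction at a single wavelength can be estimated as follows; the permittivities and the non-confocal shell remain placeholder assumptions, and the dummy medium is modelled by keeping only the real part of the NMA permittivity.
\begin{verbatim}
lam = 1494e-9                    # first overtone of the N-H mode
eps_gold, eps_nma, eps_m = -115 + 11j, 2.25 + 0.02j, 1.0
geo = dict(a1=55.9e-9, b1=5e-9, a2=75.9e-9, b2=25e-9)

def sigma_ext(eps_shell):
    # extinction = absorption + scattering in the quasi-static model
    return sum(cross_sections(alpha_coated(eps_gold, eps_shell,
                                           eps_m, **geo), lam))

DE = sigma_ext(eps_nma) - sigma_ext(complex(eps_nma.real))
\end{verbatim}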
Fig. \ref{fig:Fig4} shows the calculated DE in the spectral ranges of the first overtones of the N-H and C-H stretching modes. Interestingly, the sign of the wavelength-dependent DE alternates in both cases. \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{figure4.png} \caption{Comparative analysis of the differential extinction (DE) spectra of a gold nanorod and a nanoellipsoid with NMA shells: \textbf{(a)} with semi-major axes $L$ = 49.9 nm (\textit{nanorod}) and $L$ = 55.9 nm (\textit{nanoellipsoid}); \textbf{(b)} $L$ = 60.6 nm (\textit{nanorod}) and $L$ = 68.1 nm (\textit{nanoellipsoid}). Blue curves correspond to the numerically calculated results (\textit{nanorod}), while the red curves correspond to the results obtained in the quasi-static approximation (\textit{nanoellipsoid}).} \label{fig:Fig4} \end{figure} We choose the aspect ratio of the nanorods for Fig.~\ref{fig:Fig4}a as $L/R$ = 9.98 and for Fig.~\ref{fig:Fig4}b as $L/R$ = 12.12. The numerically calculated DEs (blue) are very well reproduced by the DEs obtained in the quasi-static approximation (red) (Eq.~1), provided the aspect ratios of the nanoellipsoids are adjusted to match the plasmon resonance with the corresponding overtone ($L/R$ = 11.18 in Fig. \ref{fig:Fig4}a and $L/R$ = 13.62 in Fig. \ref{fig:Fig4}b). It is important to note that in the case of exact resonance between the plasmon in the GNR and the molecular overtone transition the sign of DE alternates. Contrary to that, in the non-resonant case DE is strictly positive. This may be clearly seen in Fig. \ref{fig:Fig4} for the C-H overtone transition at 1676 nm when the plasmon in the nanorod is tuned to 1494 nm (Fig. \ref{fig:Fig4}a), and for the N-H overtone transition at 1494 nm when the plasmon in the nanorod is tuned to 1676 nm. To explore the role of GNRs in the detectivity enhancement of small amounts of NMA, the extinction cross-sections of pure NMA shells (without a GNR) were compared with the DE. Fig. \ref{fig:Fig5} shows the dependence of both values on the NMA shell thickness. When the resonance conditions are met, the DE values exceed the extinction cross-sections of the pure NMA shells by two orders of magnitude. In particular, the first overtone of the N-H stretching mode located at 1494 nm is enhanced 114 times, while the first overtone of the C-H stretching mode located at 1676 nm is enhanced 135 times. Fig. \ref{fig:Fig5}a shows the variations of the cross-sections vs. shell thickness for $\lambda=1494$ nm and a GNR semi-major axis equal to 49.9 nm. The resonance conditions for the plasmon excitation are met when the shell thickness is equal to 20 nm. Similarly, the optical properties in the form of ECS and ACS as functions of the shell thickness are presented in Fig. \ref{fig:Fig5}b, where $\lambda=1676$ nm and the GNR semi-major axis is $L$ = 60.6 nm. \begin{figure}[h] \centering \includegraphics[width=0.75\linewidth]{figure5.png} \caption{\textbf{(a)} Optical cross-sections as functions of the NMA shell thickness: (1) differential extinction, (2) absorption cross-section of the NMA shell encapsulating the GNR, (3) extinction cross-section of the NMA shell without the GNR, and (4) the difference between the ACSs of the GNR with and without the NMA shell. The GNR semi-major axis $L$ is 49.9 nm and the wavelength is set to 1494 nm. The absorption cross-section of the NMA shell when the GNR is absent is multiplied by 20.
\textbf{(b)} The same graphs as in subplot \textbf{(a)} for the case when the wavelength is set to 1676 nm, while $L$ is 60.6 nm.} \label{fig:Fig5} \end{figure} The enhanced absorption in the NMA shell due to the plasmon near-field is accompanied by a reduced absorption in the GNR due to the screening effect \cite{Fofang2008PlexcitonicComplexes}. As a matter of fact, neither the enhanced absorption in the shell nor the reduced absorption in the core can be observed in the far-field separately. However, they combine favorably, leading to very large DE values. The dependence of DE on both the NMA thickness and the incident radiation wavelength, based on the analytical model, is presented in Fig. \ref{fig:Fig6}. In both plots the dark curve corresponds to the maxima of the LSPR. The vertical dashed lines mark the location of the overtone bands, while the horizontal dashed lines correspond to an NMA thickness of 20 nm, which leads to the coincidence of the LSPR in the chosen nanorod with the corresponding overtone band. Inspection of Fig.~\ref{fig:Fig6} confirms the main features of DE already noted in the particular cases presented in Fig. \ref{fig:Fig4} and Fig. \ref{fig:Fig5}, namely, that the largest absolute value of DE is obtained at the resonance and that the sign of this largest DE value is negative. \begin{figure}[h] \centering \includegraphics[width=0.45\linewidth]{figure6a.png} \includegraphics[width=0.45\linewidth]{figure6b.png} \caption{Differential extinction (DE) values as functions of the NMA shell thickness and the incident radiation wavelength for GNRs with semi-major axes of $L$ = 55.9 nm \textbf{(a)} and $L$ = 68.1 nm \textbf{(b)}, respectively. The vertical dashed lines show the positions of the two overtone bands, while the horizontal dashed lines mark the shell thickness ($t$ = 20 nm) that leads to a plasmon resonance tuned to the corresponding overtone band. The dark curve is drawn through the maxima of the plasmon resonances.} \label{fig:Fig6} \end{figure} \section{Conclusion} In conclusion, we explored for the first time the \textit{differential extinction} of forbidden molecular overtone transitions coupled to localized surface plasmons. We showed that the differential extinction provides SENIRA with a two-orders-of-magnitude enhancement. A nontrivial consequence of the simulations is that the enhanced absorption in the analyte is accompanied by a reduced absorption in the gold nanorods, which exceeds the absorption enhancement of the analyte and forms a signal that may be readily sensed in the far-field. Hence, the local field enhancement of a nanoparticle can result in considerable sensitivity improvements for overtone spectroscopy in the NIR spectral range. \section*{Funding} This work was supported by the State of Israel-Innovation Authority, Ministry of Economy, Grant No. 62045, and by the Ministry of Science and Higher Education of the Russian Federation (Project 3.4903.2017/6.7). This work was also financially supported by the Government of the Russian Federation, Grant 08-08. The research described was performed as part of a joint Ph.D. program between the BGU and ITMO University.
{ "timestamp": "2019-04-16T02:05:50", "yymm": "1904", "arxiv_id": "1904.06465", "language": "en", "url": "https://arxiv.org/abs/1904.06465" }
\section{Acknowledgements} The authors acknowledge fruitful discussions with T.~Lancaster and D.~Manevski. This work is partially based on experiments performed at the Swiss Muon Source S$\mu$S, Paul Scherrer Institute, Villigen, Switzerland. The financial support of the Slovenian Research Agency under programs No.~P1-0125 and No.~P1-0044 and project No.~J1-7259 is acknowledged. M.G. is grateful to EPSRC (UK) for financial support (grant No. EP/N024028/1). Q.M.Z. was supported by the Ministry of Science and Technology of China (2016YFA0300504 \& 2017YFA0302904) and the NSF of China (11774419 \& 11474357). \section{Author contributions} A.Z. conceived, designed and supervised the project. M.G. and A.Z. performed the $\mu$SR measurements, with the technical assistance of C.B., and analysed the data. R.\v{Z}. carried out the NRG calculations. M.G. developed the percolation-theory-based model for the impurity-cluster spin. Y.L. and Q.M.Z. synthesized and characterized the sample. All authors discussed the results. A.Z. wrote the paper with feedback from all authors. \section{Competing interests} The authors declare no competing interests. \section{Additional information} Supplementary information is available for this paper.\\ Correspondence and requests for materials should be addressed to A.Z. \newpage \section{Methods} \subsection{$\mu$SR measurements} The $\mu$SR investigation was conducted on the LTF instrument at the Paul Scherrer Institute (PSI), Switzerland, on a $\sim$100\% deuterated powder sample from the same batch as the one used in our previous investigations \cite{gomilsek2017field, gomilsek2016instabilities, gomilsek2016muSR}. The sample was glued onto a silver sample holder with diluted GE Varnish to ensure good thermal conductivity. The measurements were performed between 21~mK and 10~K in various transverse (TF) and longitudinal (LF) applied fields $B\leqslant 1$~T with respect to the initial muon-spin polarization. In the TF setup the initial muon polarization was tilted by $\sim$45$^\circ$ away from the beam/field direction. Its component perpendicular to the field was detected by a set of detectors. The muon asymmetry $A(t)$, which is proportional to the muon polarization \cite{yaouanc2011muon}, was measured and modelled by \begin{eqnarray} \nonumber A_{TF}(t)&=A_1\cos\left(2\pi\nu_1 t +\phi \right){\text e}^{-\lambda_1 t}\\ &+A_2\cos\left(2\pi\nu_2 t +\phi \right ){\text e}^{-\lambda_2 t}, \end{eqnarray} where the total initial muon asymmetry was $A_1+A_2=0.199(12)$ and the ratio of the two components was $A_2/A_1=22\%$. A typical TF muon asymmetry curve is shown in Supplementary Fig.~5, together with the corresponding Fourier transform. The fitting parameters are given in the Supplementary Methods. The signal $A_1$ was attributed to the muons stopping in the sample and the signal $A_2$ to the muons stopping in the diamagnetic silver sample holder. Since the background was diamagnetic, it decayed in time much more slowly than the intrinsic signal ($\lambda_2 \ll \lambda_1$) and its temperature-independent oscillation frequency $\nu_2$ could be used as the reference Larmor frequency. The Knight shift was calculated as \begin{equation} K=(\nu_1-\nu_2)/\nu_2. \end{equation}
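As a schematic illustration of how the Knight shift is extracted, the following Python sketch implements the two-component TF model and the shift definition above. It is an example only: in practice the parameters $A_{1,2}$, $\nu_{1,2}$, $\phi$ and $\lambda_{1,2}$ are obtained from least-squares fits to the measured asymmetry (e.g. with scipy.optimize.curve\_fit).
\begin{verbatim}
import numpy as np

def A_TF(t, A1, nu1, lam1, A2, nu2, lam2, phi):
    # Two-component transverse-field asymmetry model from the Methods
    return (A1*np.cos(2*np.pi*nu1*t + phi)*np.exp(-lam1*t)
            + A2*np.cos(2*np.pi*nu2*t + phi)*np.exp(-lam2*t))

def knight_shift(nu1, nu2):
    # K = (nu1 - nu2)/nu2, nu2 being the silver-background Larmor frequency
    return (nu1 - nu2)/nu2
\end{verbatim}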
Combining the new $\mu$SR measurements with those previously published in Ref.~\onlinecite{gomilsek2016muSR}, we could better constrain the \textit{a priori} unknown backgrounds in both experiments by comparing and simultaneously fitting the old and new measurements at the same values of $T$ and $B$. After a careful self-consistent fit of all measured data, we found that the best estimates of the low-field Knight-shift values are reduced by $\sim$15\% in relative terms when compared to the data published in Ref.~\onlinecite{gomilsek2016muSR}. Consequently, the magnetic coupling between the muon spin and the impurities, $a=31\;\text{mT}/\mu_B$, is reduced by the same amount compared to the previously published value. The new values should be taken as the definitive ones. In the LF setup, which was used to measure the longitudinal muon relaxation rate $\lambda$, the initial muon-spin polarization was along the beam/field direction and the muon asymmetry was measured along the same direction. The asymmetry was modelled with a stretched-exponential model \begin{eqnarray} A_{LF}(t)&=A_1{\text e}^{-(\lambda t)^\beta}+A_2, \end{eqnarray} where the total initial asymmetry was $A_1+A_2=0.239(4)$, with the same ratio $A_2/A_1=22\%$ as in the TF experiment. The same stretching exponent $\beta = 0.86(5)$ was found as in our previous LF studies \cite{gomilsek2016instabilities,gomilsek2016muSR}. A typical set of LF muon asymmetry curves at a selected temperature is shown in Supplementary Fig.~5c. \subsection{NRG calculations} The numerical renormalization group (NRG) method \cite{wilson1975renormalization, zitko2009energy} was utilized to compute the impurity magnetization as a function of magnetic field and temperature. We used the discretisation parameter $\Lambda=2$, averaged over two discretisation grids, and kept a high number of states to ensure full convergence. Wilson's thermodynamic definition \cite{wilson1975renormalization} of the Kondo temperature, $T_K \chi_\mathrm{imp}(T_K) = 0.07$, was applied. Here $\chi_\mathrm{imp}$ is the impurity contribution to the total system's magnetic susceptibility. The reference calculations were performed for a spin-1/2 magnetic impurity with an isotropic Kondo exchange coupling to a bath of non-interacting spin-1/2 fermions with a constant density of states in the wide-band limit. The Hamiltonian of the reference model is \begin{equation} H=\sum_{k\sigma} \epsilon_k c^\dag_{k\sigma} c_{k\sigma} + J \mathbf{S} \cdot \mathbf{s}(\mathbf{r}=0) + g \mu_B S_z B, \end{equation} where the operators $c_{k\sigma}$ describe itinerant particles with momentum $k$, spin $\sigma$, and energy $\epsilon_k$, $J$ is the Kondo exchange coupling, $\mathbf{S}$ is the quantum-mechanical spin-$1/2$ operator of the impurity, $\mathbf{s}(\mathbf{r}=0)$ is the spin density of the itinerant particles at the position of the impurity, $g$ is the impurity $g$-factor, $\mu_B$ the Bohr magneton, and $B$ the magnetic field. $T_K$ is given approximately by $T_K=D \sqrt{\rho J} \exp(-1/\rho J)$. Here $D$ is the half-bandwidth of the itinerant-particles band and $\rho=1/(2D)$ is the corresponding density of states.
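For orientation, this approximate formula is trivial to evaluate; the sketch below is an illustration only (the $T_K/D$ values quoted in the following paragraphs come from the thermodynamic definition rather than from this formula).
\begin{verbatim}
import numpy as np

def kondo_TK(rhoJ, D=1.0):
    # Approximate Kondo scale T_K = D*sqrt(rho J)*exp(-1/(rho J))
    return D*np.sqrt(rhoJ)*np.exp(-1.0/rhoJ)

print(kondo_TK(0.15))  # ~5e-4, close to the quoted T_K/D = 4.2e-4
print(kondo_TK(0.7))   # ~0.2 from this formula (the thermodynamic
                       # definition gives T_K/D ~ 0.5 for this coupling)
\end{verbatim}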
The reference model describes the intermediate and high temperature ranges very well (Fig.~\ref{fig2}a). At low temperatures we find a deviation, which is still within the experimental uncertainty (and hence perhaps not even statistically significant), but quite systematic. The slope of the calculated $M(B)/B$ curve is appreciably smaller than the slope of the measured $K(B)$ data points at 21~mK (Fig.~\ref{fig2}b). Therefore, additional sets of calculations were performed for various perturbations. \textit{Finite spinon bandwidth}: We first relaxed the assumption of the wide-band limit $T_K \ll D$, where $D$ is the half-bandwidth of the conduction band. We considered the Kondo impurity model with a flat band with $\rho J=0.7$ and $T_K/D \approx 0.5$, while the reference model had $\rho J=0.15$ and $T_K/D=4.2\times 10^{-4}$. For this rather extreme case, the agreement with experiment is slightly improved at low temperatures, but is worse at high temperatures (Supplementary Fig.~2a). \textit{Finite spinon $g$ factor}: In the wide-band limit, the Zeeman term of the itinerant particles can be neglected, since the impurity magnetic susceptibility $\chi_i \propto 1/T_K$ is much larger than the Pauli susceptibility of the band, $\chi_b \propto 1/D$. The case of a finite Zeeman term in the conduction band, included in the calculation as described in Ref.~\onlinecite{hock2013numerical} with the spinon $g$ factor equal to the impurity $g$ factor, was therefore considered for a still rather narrow band with $T_K/D \approx 0.2$ (by taking $\rho J=0.5$). The Hamiltonian thus had an additional term $H'=g_b \mu_B B \sum_k (1/2) (c^\dag_{k\uparrow}c_{k\uparrow}-c^\dag_{k\downarrow}c_{k\downarrow})$. For this model, the slope of $M/B$~vs.~$B$ at low temperatures is actually reduced compared to the reference model (Supplementary Fig.~2b). \textit{Non-constant spinon DOS}: Since in a narrow-band situation the details of the DOS of the continuum, $\rho(\epsilon)$, may play a larger role, we investigated some possible modifications for $\rho(0) J=0.5$, e.g. a large slope of the DOS $\rho$ across the Fermi level. This was found to have little effect on the results. In Supplementary Fig.~2c we show the case of a triangular-shaped DOS with $\rho(\epsilon)=\rho(0) (D-\epsilon)/D$ for a narrow-band calculation with $T_K/D \approx 0.2$. Furthermore, potential scattering on the impurity site, a perturbation of the form $H'=V n(\mathbf{r}=0)$, where $V$ is the local potential and $n(\mathbf{r})$ the density of itinerant quasiparticles at the position of the impurity, was also found to play only a minor role (Supplementary Fig.~2d). Only strong singularities in $\rho$ close to the Fermi level could potentially lead to significant effects. \textit{Kondo-coupling anisotropy}: The Kondo exchange coupling term of the Hamiltonian can be separated into transverse and longitudinal parts, $H_K=J_\perp (S_x s_x+S_y s_y) + J_\| S_z s_z$. We studied both the limit of a dominant Ising Kondo exchange coupling ($\rho J_\|=1.5$ and $\rho J_\perp=0.075$, so that $J_\|/J_\perp=20$) and that of a dominant transverse coupling ($\rho J_\|=0.05$ and $\rho J_\perp=0.5$, so that $J_\|/J_\perp=0.1$), yielding a comparable Kondo temperature $T_K/D \approx 0.04$. Even for these unphysically large anisotropy ratios only small changes of the $M/B$ curves were found (Supplementary Fig.~2e). This can be explained by the fact that the renormalization-group flow toward the strong-coupling fixed point of the anisotropic Kondo model tends to restore the isotropy at low energy scales \cite{hewson1997kondo}. \textit{Partial gap opening in the spinon DOS}: At low temperatures and sizeable fields the quantum spin liquid in {Zn-brochantite} becomes partially gapped through a field-induced spinon-pairing instability \cite{gomilsek2017field}. The fraction of residual ungapped spinons is $f=0.3$ at $B=1$~T and decreases with increasing field. The behaviour for $B<1$~T is not known; therefore, a linear extrapolation to low fields with $f=1$ at $B=0$ was made.
As the simplest approximation to incorporate the partial gap opening in the calculations we assumed a modification of the conduction-band DOS proportional to the gapped fraction $1-f$ of spinons. For this gapped fraction we considered the case of a hard gap of width $\Delta$, and the case of a BCS-like redistribution of the spectral weight with $\Delta_\mathrm{BCS}=\Delta$. The partial gapping of spinons leads to reduced Kondo screening at very low temperatures due to a decreased spinon DOS at the Fermi level, hence the impurity magnetization is enhanced at low temperatures (Supplementary Fig.~2f). This effect increases in strength as the magnetic field is increased, thus inverting the slope of the $K(B)$ curve at low temperatures, which clearly contradicts the experiment. We note that extreme versions of the DOS reduction were assumed with a full suppression in the energy range $|\epsilon|<\Delta$. The actual gap is very likely to be softer. \textit{Finite impurity concentration}: The impurity concentration of 6--9\% in Zn-brochantite \cite{li2014gapless} is sufficiently large to prompt the question of possible impurity--impurity-coupling corrections to the dilute-limit results. The relevant model to describe this situation is the Kondo lattice model (KLM) with a random distribution of spins. The corresponding Hamiltonian consists of a tight-binding lattice of non-interacting itinerant fermions with density $n$ ($n=1$ is a half-filled band) and additional local moments ($S=1/2$ spins) coupled by the Kondo exchange coupling on a random subset of sites with concentration $p$. We calculated $M(B,T)$ using the real-space dynamical mean-field theory (RDMFT) with the NRG as the impurity solver. This is a conceptually simple (but numerically expensive) way to extend the single-impurity NRG results to the case of a finite impurity concentration. The calculations were performed on a $17 \times 17$ lattice with electron occupancy $n=0.5$ (quarter-filled band), for $\rho J=0.4$. For low impurity concentrations the curves are all very similar -- the curves for $p=1\%$ and $p=10\%$ actually overlap within the numerical errors (Supplementary Fig.~3). At $p=20\%$, some deviations are observable in the high-field range, while the low-field results are still little affected. We thus conclude that the 6--9\% impurity concentration in Zn-brochantite can be safely considered as corresponding to the ``dilute limit'' where the magnetization can be computed using the single-impurity model. This is also in line with heavy-fermion systems where the magnetic susceptibility and other quantities scale with the concentration of magnetic ions over a surprisingly large concentration range \cite{lin1987}. A magnetization plateau that is found in the dense-impurity limit at intermediate fields (Supplementary Fig.~3) is consistent with theory \cite{kusminskiy2008,golez2013} and corresponds to a transition from a paramagnetic state to a ferromagnetic half-metal state, indicating the existence of a well-defined partial gap. \textit{Renormalization of $g$ factors}: Finally, we mention two cases that lead to theoretical predictions in better agreement with experiment, but do so for parameter values that either differ from the experimentally established ones or are clearly unphysical. An example of the former case is the assumption of the impurity $g$ factor much in excess of $g=2.3$. 
In Supplementary Fig.~4a we show the calculation with $g=6.5$, which is in near-perfect agreement with experiment at all temperatures and fields. An example of the latter case is the assumption of a large negative value of the $g$ factor of the itinerant fermions. In Supplementary Fig.~4b we show the calculations for $g_b=-3$, which fit the low-temperature results almost perfectly but deviate from experiment at high temperatures. \section{Data availability} The data that support the findings of this study are available from the corresponding author upon reasonable request.
{ "timestamp": "2019-04-16T02:08:17", "yymm": "1904", "arxiv_id": "1904.06506", "language": "en", "url": "https://arxiv.org/abs/1904.06506" }
\section{Introduction} The concept of quantum turbulence began with the well-known experiment by Gorter and Mellink (GM) on heat exchange in superfluid helium and with Feynman's theory, which explains the GM experiment on the basis of the dynamics of a set of chaotic quantized vortices, or a vortex tangle \cite{Gorter1949},\cite{Feynman1955}. Since then, the concept of quantum turbulence as a chaotic tangle of vortex filaments has expanded and appeared in many fields of physics, ranging from superfluids and cold atoms to heavy ions and neutron stars (see e.g. INT Program INT-19-1a, URL http://www.int.washington.edu/PROGRAMS/19-1a/). The experimental studies of superfluid turbulence are mainly based on hydrodynamic methods. These include, for example, the study of the vortex tangle using the first and second sounds, the measurement of the temperature drop or of the pressure field, the investigation of fluctuating velocity fields (via studying pressure fluctuations), etc. On the other hand, most of the known methods of creating superfluid turbulence also have a hydrodynamic origin. These are the generation of a vortex tangle by a flow (counterflow), by powerful sound fields, by using oscillating devices, etc. Therefore, knowing how to describe hydrodynamic processes in superfluids in the presence of a vortex tangle is a very topical and important problem. The article discusses the problem of constructing the coarse-grained hydrodynamics of turbulent flows of superfluids. In particular, I conduct a critical analysis of the use of the HVBK (Hall-Vinen-Bekarevich-Khalatnikov) method for the study of three-dimensional flows of superfluids. Indeed, this approach, which relates the vortex line density $\mathcal{L}(r,t)$ to the coarse-grained vorticity $\nabla \times \mathbf{v}_{s}$ via the famous Feynman rule, was initially elaborated for, and is only suitable for, stationary rotating cases. Meanwhile, at present, the use of this method for three-dimensional unsteady flows is a widespread practice. Sometimes this is done without justification; sometimes the authors refer to the so-called vortex-bundle structure of quantum turbulence. The concept of vortex bundles forming the structure of quantum turbulence is also critically discussed in the paper. I also propose an alternative variant in which the vortex line density $\mathcal{L}(r,t)$ is not associated with $\nabla \times \mathbf{v}_{s}$, but is an independent and equipollent variable described by a separate equation. The structure of the paper is as follows. The next, second section is devoted to the general formulation of the coarse-grained hydrodynamics of superfluids in the presence of a vortex tangle. In the third section the problem of the use of the HVBK method for rotating superfluids and in three-dimensional flows is discussed. An alternative variant for the study of three-dimensional turbulent flows is described in the fourth section.
\section{Coarse-grained hydrodynamics of turbulent superfluids} In the presence of vortex filaments the two-fluid hydrodynamics of superfluid helium should be modified and represented as follows: \begin{equation} \rho _{n}{\frac{\partial \mathbf{v}_{n}}{\partial t}}+\rho _{n}(\mathbf{v}_{n}\cdot \nabla )\mathbf{v}_{n}=-{\frac{\rho _{n}}{\rho }}\nabla p_{n}-\rho _{s}S\nabla T+\mathbf{F}_{mf}+\eta \nabla ^{2}\mathbf{v}_{n}, \label{equa-Vn} \end{equation} \begin{equation} \rho _{s}{\frac{\partial \mathbf{v}_{s}}{\partial t}}+\rho _{s}(\mathbf{v}_{s}\cdot \nabla )\mathbf{v}_{s}=-{\frac{\rho _{s}}{\rho }}\nabla p_{s}+\rho _{s}S\nabla T-\mathbf{F}_{mf}. \label{equa-Vs} \end{equation} We assume that the motion of both components is incompressible, $\nabla \cdot \mathbf{v}_{n}=0$, $\nabla \cdot \mathbf{v}_{s}=0$. Here $\mathbf{v}_{n}$ and $\mathbf{v}_{s}$ are the coarse-grained velocities of the normal and superfluid components (averaged over a small volume $\mathcal{V}$), $p_{n}$ and $p_{s}$ are the effective pressures acting on the normal and the superfluid component ($\nabla p_{n}=\nabla p+(\rho _{s}/2)\nabla v_{ns}^{2}$ and $\nabla p_{s}=\nabla p-(\rho _{n}/2)\nabla v_{ns}^{2}$), $p$ is the total pressure, $S$ is the entropy, $T$ is the absolute temperature, $\eta$ is the dynamic viscosity of the normal component, and $\mathbf{v}_{ns}=\mathbf{v}_{n}-\mathbf{v}_{s}$. The effects of the vortices on the two components (normal and superfluid) are described by the friction force $\mathbf{F}_{mf}$ exerted by the superfluid component on the normal component. When these forces are averaged over all vortices inside the small volume $\mathcal{V}$, the following expression for $\mathbf{F}_{mf}$ is obtained (see e.g. \cite{Schwarz1988}): \begin{equation} \mathbf{F}_{mf}=\mathcal{L}\left\langle \mathbf{f}_{MF}\right\rangle =\alpha \rho _{s}\kappa \mathcal{L}\left\langle \mathbf{s}'\times \lbrack \mathbf{s}'\times (\mathbf{v}_{ns}-\mathbf{v}_{i})]\right\rangle +\alpha ^{\prime }\rho _{s}\kappa \mathcal{L}\left\langle \mathbf{s}'\times (\mathbf{v}_{ns}-\mathbf{v}_{i})\right\rangle . \label{Fns-media} \end{equation} In this equation $\mathbf{s}'(\xi)$ is the tangent vector along the vortex filaments $\mathbf{s}(\xi)$ composing the vortex tangle, $\alpha$ and $\alpha^{\prime}$ are temperature-dependent dimensionless mutual friction parameters, and $\mathbf{v}_{i}$ is the self-induced velocity of the vortex filament. The quantity $\mathcal{L}$ is the vortex line density; the averaging $\left\langle \cdot \right\rangle$ is performed over various configurations of vortex filaments. Equations (\ref{equa-Vn})-(\ref{Fns-media}) are coarse-grained equations; hence the inclusion of the effects of the vortex lines requires a high vortex line density per unit volume. These equations have been written down and discussed for a long time; a classic variant, close to the one stated above, can be found in the book \cite{Donnelly1991}. In the written form these equations are common to any type of flow, such as counterflow, flow past obstacles, acoustic waves, etc. The main difference between different types of flows lies in the determination of the vortex line density $\mathcal{L}$ entering the expression for the macroscopic mutual friction force (\ref{Fns-media}), and in the choice of the averaging method $\left\langle \cdot \right\rangle$.
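To illustrate the structure of (\ref{Fns-media}), the following Python sketch evaluates the mutual-friction force per unit length acting on a single vortex segment, i.e. the quantity $\mathbf{f}_{MF}$ before the tangle average $\left\langle \cdot \right\rangle$ and the factor $\mathcal{L}$ are applied; all numerical values in the usage line are placeholders.
\begin{verbatim}
import numpy as np

def f_mf(sp, v_ns, v_i, alpha, alpha_p, rho_s, kappa):
    # Force per unit vortex length: alpha, alpha_p are the friction
    # parameters, sp the unit tangent s', v_ns = v_n - v_s,
    # v_i the self-induced velocity of the filament
    u = np.asarray(v_ns, float) - np.asarray(v_i, float)
    sp = np.asarray(sp, float)
    return rho_s*kappa*(alpha*np.cross(sp, np.cross(sp, u))
                        + alpha_p*np.cross(sp, u))

# Placeholder usage: a segment along z in a counterflow along x
print(f_mf([0, 0, 1], [1e-2, 0, 0], [0, 0, 0],
           alpha=0.1, alpha_p=1e-2, rho_s=0.1, kappa=9.97e-4))
\end{verbatim}
The macroscopic force is then recovered as $\mathbf{F}_{mf}=\mathcal{L}\left\langle \mathbf{f}_{MF}\right\rangle$.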
Another question concerns the temperature range for applying equations (\ref{equa-Vn})--(\ref{Fns-media}). In general, quantum turbulence is the chaotic dynamics of three strongly interacting nonlinear fields. These are the motion of the normal and superfluid components and the stochastic evolution of a set of vortex filaments (the vortex tangle). The type of interaction depends on the temperature $T$ (via mutual friction). For large $T$ the coupling is strong; both components are locked together and move jointly. As a result, we obtain essentially one-fluid hydrodynamics. The case of very low temperature is very interesting because it is ideal for testing the idea of whether the dynamics of discrete vortices (quantized vortex lines) can imitate classical turbulence. The study of superfluid turbulence at intermediate temperatures is not suitable for this purpose (due to the presence of a normal component). Quantum turbulence in superfluid helium (at intermediate temperatures) is rather a separate problem, not identical to classical turbulence. And it is precisely this case that requires the study of two coupled Navier-Stokes equations. Thus, we can say that we study the intermediate temperature range. The above equations (\ref{equa-Vn})--(\ref{Fns-media}) are a point of consensus among physicists. Disagreements begin with the question of how to perform the averaging and how to treat the variable $\mathcal{L}$. Here we discuss two main ways to perform these procedures. They are the Hall-Vinen-Bekarevich-Khalatnikov (HVBK) model (see, e.g., the book by Khalatnikov, 1965 \cite{Khalatnikov1965}) and the Hydrodynamics of Superfluid Turbulence model (Nemirovskii \& Lebedev, 1983 \cite{Nemirovskii1983},\cite{Nemirovskii1995}). \section{HVBK approach} \subsection{HVBK approach for rotating superfluids} The Hall-Vinen-Bekarevich-Khalatnikov (HVBK) model (see, for example, the book by Khalatnikov \cite{Khalatnikov1965}) became the basis for the mathematical formalism of the hydrodynamics of rotating superfluids. As is well known (see \cite{Feynman1955}), in a vessel rotating with an angular velocity $\Omega$ there appears a regular array of vortex filaments with a density $n=2\Omega /\kappa$. Such a distribution of vortices creates an average coarse-grained superfluid velocity $\left\langle \mathbf{v}_{s}\right\rangle$, which satisfies the condition of solid-body rotation $\left\langle \mathbf{v}_{s}\right\rangle =\mathbf{\Omega \times r}$. The vorticity field $\mathbf{\omega }$ is $\mathbf{\omega }=2\mathbf{\Omega }$. Therefore, the density of the vortex filaments in this case can be related to the vorticity field by the following relation:
\begin{equation}
\nabla \times \mathbf{v}_{s}=\kappa \mathcal{L}. \label{rot via L}
\end{equation}
Due to the smallness of the quantum of circulation $\kappa$, even a relatively weak rotation speed produces a high density of vortex lines. Thus, it is possible to construct a coarse-grained hydrodynamics, which averages the contribution of many individual vortex lines and incorporates it into the macroscopic evolution equations for the superfluid and normal He II velocities.
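As a quick numerical illustration of how dense this array is, the short Python sketch below evaluates $n=2\Omega/\kappa$; the rotation rate is an arbitrary example value.
\begin{verbatim}
# Areal density of quantized vortex lines in rotating He II: n = 2*Omega/kappa.
# kappa = h/m_4 is the quantum of circulation; Omega is an example value.

kappa = 9.97e-4   # cm^2/s, quantum of circulation in helium-4
Omega = 1.0       # rad/s, example rotation rate

n = 2.0 * Omega / kappa
print(f"n = {n:.0f} vortex lines per cm^2")   # ~2000 lines/cm^2 at 1 rad/s
\end{verbatim}
Even this modest rotation rate thus produces thousands of vortex lines per square centimeter, which is what makes the coarse-grained description meaningful.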
Combining (\ref{Fns-media}) with (\ref{rot via L}) one gets the expression for the averaged coarse-grained mutual friction $\mathbf{F}_{mf}^{(HVBK)}$:
\begin{equation}
\mathbf{F}_{mf}^{(HVBK)}=\rho _{s}\alpha {\hat{\omega}}\times \lbrack \mathbf{\omega }\times (\mathbf{v}_{ns}-\tilde{\beta}\nabla \times \mathbf{\hat{\omega}})]+\rho _{s}\alpha ^{\prime }\mathbf{\omega }\times (\mathbf{v}_{ns}-\tilde{\beta}\nabla \times \mathbf{\hat{\omega}}), \label{Fns-HVBK}
\end{equation}
where $\mathbf{\omega }=\nabla \times \mathbf{v}_{s}$ is the averaged superfluid vorticity and $\hat{\omega}=\mathbf{\omega }/|\omega |$. Thus, the question of the elimination of the vortex line density is resolved with the use of the Feynman rule, which allows one to study various problems of the coarse-grained dynamics of rotating superfluids (see, e.g., the book by Sonin \cite{Sonin2016}). \subsection{HVBK approach for three-dimensional flows} The HVBK model is a fruitful and elegant approach; however, \emph{it is principally designed for rotating superfluids}. Nevertheless this approach, which uses the ansatz $\nabla \times \mathbf{v}_{s}=\kappa \mathcal{L}$ (\ref{rot via L}) and the force $\mathbf{F}_{mf}^{(HVBK)}$ (\ref{Fns-HVBK}), is widely used for numerical and analytical studies of coarse-grained hydrodynamic problems of turbulent superfluids in three-dimensional situations. This approach seems to be unfounded. In my opinion, there is no way to apply it to three-dimensional hydrodynamics. Anticipating the objections of readers, I would like to discuss the usual arguments for using the HVBK method. Probably the only argument is the conviction that the vortex tangle consists of so-called vortex bundles, unifying many nearly parallel vortex filaments. There are a few papers (see, e.g., \cite{Baggaley2012a}, \cite{Sasa2011}) where the authors claim the existence of the bundles. In fact, they only demonstrated how, with the help of statistical analysis, one can find a small polarization (the prevailing of one direction over the other) in the vortex tangle. But, firstly, it is just a statistical effect and, secondly, under no circumstances does this small polarization permit the use of the Feynman rule (\ref{rot via L}), which is crucially needed in the HVBK equations. To use this ansatz, it is necessary that all filaments in the vortex tangle be involved in the rotation. But if the polarization is partial, then there are many free (randomly oriented) vortex filaments that contribute to mutual friction and do not contribute to the Feynman rule (\ref{rot via L}). In addition, these chaotic lines interact with the polarized lines, thereby destroying the polarization and, correspondingly, the quasi-bundle structure. There are other examples of the observation of vortex bundles (in numerical works), where they are artificially prepared structures or are initiated by eddies of the normal component (see, e.g., \cite{Samuels1993}). However, there are works in the literature in which it is stated that even if the vortex bundles are artificially created, they can be destroyed rather soon. For instance, G. Volovik \cite{Volovik2004} has shown that at low temperatures, where the mutual friction is small, the existence of the bundles is impossible. They should melt, changing into a highly irregular structure.
Another example is a series of numerical simulations by Kivotides (\cite{Kivotides2011}, \cite{Kivotides2012}, \cite{Kivotides2014}, \cite{Kivotides2018}), who studied the exact (not HVBK) dynamics of quantum vortices in turbulent flows (at finite temperature) and concluded "that the results do not show that a turbulent normal-fluid with a Kolmogorov energy spectrum induces superfluid vortex bundles in the superfluid". In paper \cite{Kivotides2014} Kivotides reported the observation of clusters with weakly polarized vortex lines and the associated Kolmogorov-type spectra $E(k)\propto k^{-5/3}$. To some extent this is an expected result, since the appearance of the Kolmogorov spectrum requires the formation of coherent structures. However, the partial polarization again prevents the use of the closure $\nabla \times \mathbf{v}_{s}=\kappa \mathcal{L}$ in the mutual friction, since \emph{all} the lines contribute to the friction, and the partial polarization (if any) includes only a small fraction of the total vortex line density. In this regard, it seems appropriate to discuss the question of the relation between the vortex bundle arrangement and the Kolmogorov-type spectrum. For a uniform array of vortex filaments the coarse-grained velocity field is $\mathbf{v(r)=\Omega \times r}$. Accordingly, the Fourier transform $\mathbf{v(k)}$ scales as $1/k^{3}$. This implies that the two-dimensional spectrum $E(\mathbf{k})=dE/d^{2}\mathbf{k}$ should behave as $1/k^{6}$, and the isotropic spectrum $E(k)$, which depends on the absolute value of the wave vector $k$, as $E(k)\varpropto 1/k^{5}$. Thus, we state that uniform vortex bundles do not generate Kolmogorov-type spectra. It can be shown that a nonuniform vortex array with a distribution of the density of vortex filaments $n(r)=\Delta N/\Delta r\varpropto 1/r^{2/3}$ does generate Kolmogorov-type spectra $E(k)\varpropto 1/k^{5/3}$ (see \cite{Kozik2009}, \cite{Nemirovskii2015a}). Of course, for very strong mutual friction the vortex filaments can be completely trapped by the eddies of the normal component and follow the dynamics of the normal fluid. In fact, in this situation the coarse-grained hydrodynamics becomes one-fluid dynamics, in which both components move together. There are many physical mechanisms that result in the destruction of the regular vortex bundle structure. The apparent source of the destruction of the bundle structures is the various reconnections. Thus, as demonstrated in the paper by Kursa, Bajer \& Lipniacki \cite{Kursa2011}, and in the work by Kerr \cite{Kerr2011}, even a single reconnection results in a cascade of vortex loops of various sizes being chaotically radiated from the reconnection point. Clearly these propagating loops collide with the lines composing the bundles, triggering new reconnections and developing an avalanche-like randomization. Some authors (see, e.g., \cite{Alamri2008},\cite{Baggaley2012c}), based on their numerical results, claim that the vortex bundles are robust structures with respect to the reconnection of two adjacent bundles. This conclusion, however, concerns the situation when the different bundles have the same structure ($N$ strands in each bundle). In quantum fluids full reconnection between bundles carrying different numbers of threads is not possible for topological reasons, and a residual structure, analogous to the "bridging" in classical hydrodynamics, should accompany the collision of such bundles (see, e.g., papers \cite{Melander1989}, \cite{Zabusky1989}, \cite{Kida1991}, \cite{Boratav1992}).
The analog of the classical "bridging" leads to the randomization and violation of the structure of the bundles and to the creation of vortex loops. Also, the long-range interaction between the vortex filaments in the bundles and the "external" vortices destroys the regular array due to the action of tidal forces. As for the filaments inside a bundle, a direct reconnection event for them is impossible, since for the reconnection the approaching vortices must be antiparallel. Reorganization of lines destroys the parallel array of vortex filaments. Similarly, the processes of emission and re-absorption of rings by the vortices (the "anti-bottleneck" proposed by Svistunov \cite{Svistunov1995,Kozik2009}) should also lead to the fragmentation of the regular arrays and the appearance of chaotic loops. There are also experimental results which cast doubt on the idea of bundles. Thus, in experiments by Roche et al. \cite{Roche2007} and by Bradley et al. \cite{Bradley2008}, it was observed that the spectrum of the fluctuations of the VLD $\mathcal{L}$ is compatible with a $-5/3$ power law. This contradicts the idea of the bundle structure, since the spectrum of the vorticity (and, correspondingly, of the VLD $\mathcal{L}$ (via Eq. (\ref{rot via L}))) should scale as a $1/3$ power law, which, indeed, was observed in the paper by Baggaley \cite{Baggaley2012a}. One more objection to using the HVBK approach is that the Feynman rule (\ref{rot via L}) is applicable only to stationary situations. Its validity for the transient processes that take place in highly fluctuating turbulent flows is unclear. Summarizing this subsection, we would like to stress that the bundle structure of quasi-classical quantum turbulence, although clear and transparent, is not firmly confirmed. The regular vortex bundles, even if they spontaneously appear, are extremely unstable structures, which can be easily destroyed. \subsection{Where is it from?} This is a somewhat mysterious question: how did the HVBK method, designed for the rotational or two-dimensional case, come to be used for three-dimensional turbulent flows? Trying to find the origin, I analyzed a large body of literature. The most frequent references are to the papers by Sonin \cite{Sonin1987} and \cite{Hills1977a}. But these links are absolutely irrelevant, since the authors explicitly wrote that they work with rotating helium. Probably one of the first papers in which the use of this method for three-dimensional turbulent flows is discussed is the work of Holm \cite{Holm2001}. It is interesting, however, that he started the paper with the text "Recent experiments establish the Hall-Vinen-Bekarevich-Khalatnikov (HVBK) equations as a leading model for describing superfluid Helium turbulence. See Nemirovskii and Fiszdon [1995] and Donnelly [1999] for authoritative reviews." But R. Donnelly \cite{Donnelly1999} discussed the HVBK approach for rotating helium only. As for my (with W. Fiszdon) paper \cite{Nemirovskii1995} on superfluid turbulence, firstly, there was no mention of HVBK theory at all, and, secondly, I generally opposed this method for treating three-dimensional quantum turbulence. Thus, the origin of the idea of using the purely rotational HVBK approximation for three-dimensional turbulent flows is rather vague.
Summarizing this Section, I can state that the HVBK ansatz $\nabla \times \mathbf{v}_{s}=\kappa \mathcal{L}$ is in general unfounded in the three-dimensional case; therefore, works using this approach are questionable, and the corresponding results are not reliable. \section{Other ways to treat the vortex line density} The main attractive advantage of the HVBK approach was to get rid of the vortex line density $\mathcal{L}$ in the equations of motion (\ref{equa-Vn})--(\ref{Fns-media}). It seems, however, that the question of "eliminating" the vortex line density $\mathcal{L}(\mathbf{r},t)$ should be solved in a fundamentally different way. We should not "eliminate" the quantity $\mathcal{L}(\mathbf{r},t)$, but, on the contrary, include it in the consideration as an independent and equipollent variable. Correspondingly, we have to consider the problem in which there are three independent variables: the velocities $\mathbf{v}_{n}(\mathbf{r},t)$ and $\mathbf{v}_{s}(\mathbf{r},t)$, and the vortex line density $\mathcal{L}(\mathbf{r},t)$. We can also add the density field $\rho (\mathbf{r},t)$ as well as the entropy field $S(\mathbf{r},t)$ if it is pertinent. In this way, however, we need an additional independent equation for the temporal and spatial evolution of the quantity $\mathcal{L}(\mathbf{r},t)$. This is a particular task that requires a lot of effort. The derivation of such an equation certainly depends on the type of flow, such as counterflow, co-flow, flow past objects, unsteady rotation, etc. In fact, so far the corresponding equation exists only for the case of counterflow; this is the famous Vinen equation (or some modernized versions of this equation). Although there are some problems with this equation (see, e.g., Sec. IV in \cite{Nemirovskii2018e}), it works well for hydrodynamic problems, e.g., acoustic or engineering applications. It is important to stress that the construction of a theory of the evolution of three fields ($\mathbf{v}_{n}$, $\mathbf{v}_{s}$ and $\mathcal{L}(\mathbf{r},t)$) is not an automatic addition of the Vinen equation to the Landau two-fluid hydrodynamics. It is a more involved procedure, since all variables (energy, entropy, etc.) change in the presence of the vortex tangle. This self-consistent procedure was implemented in \cite{Nemirovskii1983}; it is called the Hydrodynamics of Superfluid Turbulence (HST). This theory was successful: it explained many experimental results on the nonlinear acoustics of the first and second sounds, the evolution of strong heat pulses, the formation of shock waves, etc. It was also successfully applied to the problems of unsteady heat transfer and boiling of He II (\cite{Nemirovskii1995},\cite{Jou2005},\cite{Kondaurova2017}). These examples demonstrate that the way of treating the vortex line density $\mathcal{L}(\mathbf{r},t)$ as an additional and equipollent variable seems to be productive and fruitful. Unfortunately, for other types of flows there is no theory describing the temporal-spatial evolution of the vortex line density, and there are no ideas (similar to the Feynman qualitative scenario) of how to obtain the corresponding equation. Vinen's equation reflects the fact that the vortex line density $\mathcal{L}(\mathbf{r},t)$ grows due to the relative velocity $\mathbf{v}_{n}-\mathbf{v}_{s}$ and attenuates, probably, due to the cascade-like breakdown of vortex loops described by Feynman \cite{Feynman1955}. That is a good guideline for how to develop an appropriate theory for any flow, involving, of course, some auxiliary speculations.
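For the reader's convenience, the Vinen equation mentioned above can be written in its commonly quoted textbook form (the temperature-dependent coefficients $\alpha_{V}$ and $\beta_{V}$ are phenomenological constants; this is the standard form, not a result specific to the present paper):
\begin{equation*}
\frac{d\mathcal{L}}{dt}=\alpha_{V}\,|\mathbf{v}_{n}-\mathbf{v}_{s}|\,\mathcal{L}^{3/2}-\beta_{V}\,\kappa\,\mathcal{L}^{2},
\end{equation*}
where the first (production) term grows with the counterflow velocity, and the second (decay) term describes the cascade-like breakdown of vortex loops.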
Besides the introduction of the vortex line density into the coarse-grained hydrodynamics of superfluid turbulence, there was one more, crude and simplified way, which was applied in the early stages of research on superfluid turbulence. It was the use of the Gorter--Mellink formula for the mutual friction, which immediately follows from the Vinen theory:
\begin{equation}
\mathbf{F}_{mf}\propto A(T)(v_{n}-v_{s})^{2}(\mathbf{v}_{n}-\mathbf{v}_{s}). \label{GM}
\end{equation}
Here $A(T)$ is the Gorter--Mellink constant. In fact, this formula uses the ansatz $\mathcal{L}\propto (v_{n}-v_{s})^{2}$, well known in the theory of quantum turbulence. The use of equation (\ref{GM}) also "resolves" the problem of eliminating the vortex line density $\mathcal{L}(\mathbf{r},t)$. \section{Conclusion} Two approaches to the investigation of turbulent flows in superfluids, HVBK and HST, have been described in the paper. In the first one, the vortex line density $\mathcal{L}(\mathbf{r},t)$, crucial for the whole dynamics, is straightforwardly excluded from the equations of motion (\ref{equa-Vn}) and (\ref{equa-Vs}) with the use of the Feynman rule (\ref{rot via L}). In the second approach, the variable $\mathcal{L}(\mathbf{r},t)$ is considered as an additional independent variable obeying a corresponding evolution equation. In the present paper it has been argued that the HVBK ansatz $\nabla \times \mathbf{v}_{s}=\kappa \mathcal{L}$ is suitable only for rotating cases and fails in three-dimensional situations. The attempts to justify this procedure have given rise to a whole scientific direction (trend), which asserts that the vortex tangle in quantum turbulence is composed of so-called vortex bundles containing sets of nearly parallel lines. In the paper, I put forward a number of arguments criticizing the conception of the vortex bundle structure. References to the fact that in some numerical works a partial polarization of vortex filaments had been observed cannot be considered as justification for using the ansatz $\nabla \times \mathbf{v}_{s}=\kappa \mathcal{L}$, since \emph{all} the lines contribute to the friction, and the polarization (if any) includes only a small fraction of the total vortex line density. Furthermore, it is of great concern that the concept of vortex bundles has gone beyond the coarse-grained hydrodynamics of superfluid turbulence and often serves as the basis for other (more subtle) aspects of the theory of quantum turbulence. This seems counterproductive, since after the pioneering works of Feynman, Vinen, Donnelly, Schwarz and others, it was customary to present a vortex tangle as a set of stochastic loops with rich and diverse dynamics. These loops are subject to large deformations (due to their highly nonlinear dynamics); they reconnect with each other and with the wall, split and merge, creating a lot of daughter loops. This vision was confirmed in numerous numerical simulations (see, e.g., \cite{Schwarz1988},\cite{Aarts1994},\cite{Tsubota2000},\cite{Berloff2002},\cite{Kondaurova2008},\cite{Kivotides2014}). This, let us say, Feynman-Vinen model is very different from the vortex bundle model, where almost the only possible dynamics of the vortex filaments is the evolution of Kelvin waves along the lines composing the bundles. In summary, the use of the ansatz $\nabla \times \mathbf{v}_{s}=\kappa \mathcal{L}$ as the closure procedure for the coupled Navier-Stokes equations (\ref{equa-Vn})--(\ref{Fns-media}) in 3D turbulent flows is not motivated and would lead to unreliable results.
And the commonly used vortex bundle model, which justifies the use of this method, is questionable and unfounded. Moreover, the vortex bundle concept disregards the real structure of the vortex tangle as a set of vortex loops, and prevents the development of an adequate theory. We assert that the introduction of an additional independent field $\mathcal{L}(\mathbf{r},t)$ into the classical two-fluid hydrodynamics of superfluids is the only correct way to construct the coarse-grained hydrodynamics of turbulent flows. \section{Acknowledgements} The study of the Hall-Vinen-Bekarevich-Khalatnikov (HVBK) approach was carried out under a state contract with IT SB RAS (No. 17-117022850027-5); the study of the Hydrodynamics of Superfluid Turbulence (HST) method was financially supported by RFBR (Project No. 18-08-00576).
\section{Introduction} Neutrinoless double beta ($0\nu\beta\beta$) decay is a key probe of physics beyond the Standard Model of elementary particles. If this decay is observed, then the neutrino is a Majorana particle\cite{Majorana} and the decay process violates lepton number conservation. If the neutrino is of Majorana type, the extremely light neutrino masses are explained via the seesaw mechanism\cite{seesaw}, and the baryon asymmetry may also be explained via leptogenesis\cite{leptogenesis}. The decay rate of $0\nu\beta\beta$, $\left( T_{1/2}^{0\nu} \right)^{-1}$, is proportional to the square of the effective neutrino mass $\langle m_{\beta\beta} \rangle$ as follows, \begin{eqnarray} \left( T_{1/2}^{0\nu} \right)^{-1} = G^{0\nu} \left| M^{0\nu} \right|^2 \langle m_{\beta\beta} \rangle^2 \end{eqnarray} where $ \left| \langle m_{\beta\beta} \rangle \right| \equiv \left| \left| U_{e1}^{L}\right|^2 m_1 + \left| U_{e2}^{L}\right|^2 m_2 e^{i\phi_2} + \left| U_{e3}^{L}\right|^2 m_3 e^{i\phi_3} \right|$, $T_{1/2}^{0\nu}$ is the half-life, $G^{0\nu}$ is the phase space factor, $M^{0\nu}$ is the nuclear matrix element, $e^{i\phi_{2,3}}$ are Majorana CP phases, and $U^L_{ej}$ ($j = 1\text{--}3$) is the neutrino mixing matrix. The event rate thus determines the scale of the light neutrino masses. $0\nu\beta\beta$ decay emits two beta rays whose total energy corresponds to the Q-value of the double-beta-decaying nucleus. Usually the Q-values lie in the region of environmental backgrounds caused by the uranium and thorium decay chains. The sensitivity to $\langle m_{\beta\beta} \rangle$ is proportional to the square root of the exposure for a background-free case, but it is reduced to the fourth root of the exposure in background-limited cases\cite{2to4square}. Therefore, in order to observe the $0\nu\beta\beta$ signal, we need many double-beta-decaying nuclei, a long live time, and a background-free environment or powerful background rejection methods to eliminate noise events. Currently, four types of experiments are used in $0\nu\beta\beta$ searches. The first type are high-energy-resolution detectors ($\sim$0.1\%) using germanium detectors~\cite{GERDA, MAJORANA} or bolometers~\cite{CUORE}. These detectors can reduce backgrounds in the observed energy spectrum. The second type are tracking detectors. There the source and the detector are separated; thus it is hard to contain a large amount of nuclei, but the event pattern of $2\beta$ decay can be identified~\cite{NEMO}. The third type are xenon TPC detectors~\cite{EXO,NEXT,PandaX,AXEL}. This type partially combines the previous two features: good energy resolution ($\sim$0.1\% for gas, $\sim$3\% for liquid) and strong event pattern identification by the TPC. The final type are liquid scintillator detectors~\cite{Zen400final,SNO+}. These detectors have poor energy resolution ($\sim$10\%) and no particle identification methods for $\beta/\gamma$. However, liquid scintillator detectors realize ultra-low background environments for radiation from uranium, thorium, and other metals used in the detector or the vessel, and can contain a large amount of double-beta-decaying nuclei. Thus this type is one of the most sensitive detectors for $0\nu\beta\beta$ searches. In this paper, we report the current status of the liquid scintillator (LS) detector experiments KamLAND-Zen and SNO+.
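To make the rate formula concrete, the short Python sketch below converts a half-life limit into a bound on $\langle m_{\beta\beta} \rangle$ via $\langle m_{\beta\beta} \rangle = m_e/(|M^{0\nu}|\sqrt{G^{0\nu} T_{1/2}^{0\nu}})$, with the effective mass expressed in units of the electron mass $m_e$ so that the dimensions balance; the phase-space factor and the matrix-element span below are illustrative values for $^{136}$Xe, not the exact inputs of any particular analysis.
\begin{verbatim}
import math

# Convert a 0vbb half-life limit into a bound on the effective Majorana
# mass, using (T_1/2)^-1 = G |M|^2 (<m_bb>/m_e)^2, i.e.
# <m_bb> = m_e / (|M| * sqrt(G * T)). G and the |M| span are illustrative.

m_e = 0.511e9    # electron mass in meV
G   = 1.45e-14   # phase-space factor for 136Xe, 1/yr (illustrative)
T   = 1.07e26    # half-life limit, yr

for M in (2.5, 6.7):   # illustrative span of nuclear matrix elements
    m_bb = m_e / (M * math.sqrt(G * T))
    print(f"|M| = {M:.1f}  ->  <m_bb> < {m_bb:.0f} meV")
\end{verbatim}
With these inputs the bound spans roughly 61--164 meV, consistent with the KamLAND-Zen 400 range quoted later in this paper.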
\section{Liquid scintillator detector} \begin{figure}[htb] \centering \includegraphics[height=2.0in]{AllowedRegion2.png} \vspace{-5mm} \caption{Allowed region of the effective neutrino mass as a function of the lightest neutrino mass.} \label{fig:alowedregion} \end{figure} Liquid scintillator experiments for $0\nu\beta\beta$ decay were originally developed for neutrino oscillation experiments in the MeV energy region. KamLAND-Zen and SNO+ have similar designs. Inward-looking photomultiplier tubes (PMTs) are set on the inner surface of an $\sim$18 m diameter stainless-steel tank or structure, and $\sim$1,000 tons of LS is stored in a 13 m diameter nylon/EVOH-based balloon (KamLAND-Zen) or a 12 m diameter acrylic sphere (SNO+). The scintillation light emitted by radiation in the LS ($\alpha, \beta, \gamma$, etc.) is detected by the PMTs. The vertex is reconstructed from the hit timing, and the energy is reconstructed from the transparency-corrected charge of the PMTs. The liquid scintillator is purified by water extraction, distillation, nitrogen purging, etc., and the contamination level for uranium and thorium can reach a sufficient level of $O(10^{-18})$ g/g. Thus the cleanliness of the container for the $\beta\beta$-nuclei-loaded liquid scintillator, or a large self-shielding distance from the surface of the container, are the key elements for a high-sensitivity search for $0\nu\beta\beta$ decay. Other possible backgrounds in liquid scintillator detectors are spallation products of carbon caused by cosmic-ray muons, solar $^8$B neutrinos, and the energy tail of the $2\nu\beta\beta$ decay spectrum. If the detector site is very deep and the muon event rate is low, the $^{10}$C background made by spallation is negligible. However, if this rate is high, it has to be rejected, for example, by the triple coincidence between the muon, the 2.2 MeV $\gamma$-ray from neutron capture on a proton, and the $^{10}$C decay ($\beta^{+}$, $\tau$ = 27.8 s, Q = 3.65 MeV). The solar $^8$B neutrino background is proportional to the volume and cannot be rejected by event identification; thus a small active volume is desirable. Because of the poor energy resolution, the high-energy tail of the $2\nu\beta\beta$ decay spectrum extends into the region of interest of the $0\nu\beta\beta$ search. Therefore, a high light yield and high transparency of the LS are required to improve the energy resolution. Figure~\ref{fig:alowedregion} shows the allowed region of $0\nu\beta\beta$ decay calculated from the neutrino oscillation parameters. Current ongoing projects search for the ``degenerate mass'' region with $O(10^{1\text{--}2})$ kg of $\beta\beta$ nuclei. In order to reach the ``inverted hierarchy'' region, which includes the inverted mass hierarchy ($m_2 > m_1 > m_3$) as well as the degenerate and normal hierarchy ($m_3 > m_2 > m_1$), $O(10^3)$ kg of $\beta\beta$ nuclei are needed. Liquid scintillator detectors, which can contain large numbers of $\beta\beta$ nuclei, are therefore among the most sensitive. \section{SNO+} The SNO+ experiment plans to use tellurium-loaded liquid scintillator (Te-LS) with hardware based on the SNO experiment. The liquid scintillator consists of linear alkylbenzene and PPO (2 g/L). 780 tons of liquid scintillator in a 6 m radius acrylic vessel contains 0.5\% $^{\rm nat}$Te by weight. The natural abundance of $^{130}$Te in $^{\rm nat}$Te is 34.1\%; thus this experiment does not use enriched material, and 1,330 kg of $^{130}$Te can be loaded into the detector. The Q-value of $^{130}$Te is 2.527 MeV, and this energy overlaps with environmental radiation from uranium and thorium as described above.
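The quoted loaded-isotope mass follows from simple arithmetic, as the minimal Python sketch below verifies (the isotopic abundance is treated as a mass fraction here, a small approximation):
\begin{verbatim}
# Mass of 130Te loaded in SNO+: 780 t of LS, 0.5% natTe by weight,
# 34.1% abundance of 130Te in natTe (treated as a mass fraction).

ls_mass   = 780e3   # kg of liquid scintillator
te_frac   = 0.005   # natTe loading by weight
abundance = 0.341   # 130Te fraction of natTe

m_130Te = ls_mass * te_frac * abundance
print(f"130Te mass: {m_130Te:.0f} kg")   # ~1330 kg, matching the text
\end{verbatim}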
Although the acrylic vessel is not clean enough for the measurement, radiation from the vessel is rejected by using fiducialisation and the self-shielding of the Te-LS. From the current estimation, a 3.3 m radius fiducial volume (2.7 m of self-shielding) gives the best sensitivity. The detector is located $\sim$2,000 m below ground level, corresponding to 6,000 m water equivalent (m.w.e.). Due to the depth, cosmogenic muons reach the detector at a rate of only 70 events per day, and backgrounds from muon spallation products are negligible. Figure 2 shows the expected background components and the energy spectrum. The main background component is the solar $^8$B neutrinos, which are unavoidable and proportional to the volume. The sensitivity is 1.9$\times$10$^{\rm 26}$ years at 90\% C.L. with 5 years of operation. \begin{figure}[htbp] \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=50mm]{BG_SNO.png} \end{center} \label{fig:SNOspec} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=50mm]{Spec_SNO.png} \end{center} \end{minipage} \vspace{-5mm} \caption{The left figure shows the estimated background components of SNO+. The right plot shows the expected energy spectrum of SNO+ with 5 years of operation, 0.5\% $\rm ^{nat}$Te, and an R = 3.3 m fiducial volume~\cite{SNO+Nu2018}.} \end{figure} The current stage of SNO+ is construction and test operations. There are three stages of the SNO+ $0\nu\beta\beta$ decay search: the water phase, the liquid scintillator phase, and the tellurium-loaded liquid scintillator phase. In order to investigate the external backgrounds, the acrylic vessel was filled with pure water and data were acquired. The background level met the targets for future physics~\cite{MarkChen}. SNO+ has already completed the water phase and released the results of solar $^8$B neutrino measurements~\cite{SNO+solar} and of the search for invisible modes of nucleon decay~\cite{SNO+Ndecay}. Filling the acrylic vessel with distilled LS for the liquid scintillator phase started in October 2018. After the LS filling and several months of operation, tellurium will be loaded into the liquid scintillator. Tellurium can be dissolved in LS in the form of a Te-butanediol complex. 3.8 tons of telluric acid were stored underground for more than 3 years in order to wait for the decay of long-lived radioactivity produced by cosmic-ray muons and protons. Construction of an underground telluric acid purification plant is underway. The SNO+ future plan, called Phase II, includes the following improvements: 1\% tellurium loading, a high-light-yield and high-transparency LS, a detector upgrade with high-quantum-efficiency PMTs, concentrator replacement, an inner bag, and $^{\rm 130}$Te enrichment. SNO+ plans to cover the inverted hierarchy region. \section{KamLAND-Zen} KamLAND-Zen is a $0\nu\beta\beta$ decay experiment using $^{136}\rm Xe$-loaded liquid scintillator in the KamLAND detector~\cite{KL}. KamLAND is located at 1,000 m depth (2,700 m.w.e.), where the cosmogenic muon rate is $\sim$0.3 Hz. In order to suppress the muon spallation products and solar $^8$B neutrino backgrounds, which are proportional to the volume, the xenon-loaded liquid scintillator (Xe-LS) is contained in a nylon mini-balloon surrounded by 1,000 tons of LS in a 13 m diameter outer balloon (see Figure \ref{fig:OneFig}). The Xe-LS can contain xenon at almost 3\% by weight, and the isotopic abundance of $^{136}\rm Xe$ is enriched to 90.6\%.
KamLAND has achieved a $10^{-17}$--$10^{-18}$ g/g contamination level for $^{238}\rm U$ and $^{232}\rm Th$ in the liquid scintillator\cite{solar}; thus its very clean container allows a high-sensitivity search for $0\nu\beta\beta$ decay. The ``mini-balloon'' container for the Xe-LS is made of 25 $\mu$m thick nylon film and is suspended at the center of KamLAND as shown in Figure \ref{fig:TwoFig}. The nylon film has 99\% transparency and a contamination level of $\sim$2$\times$10$^{-12}$ g/g for $^{238}\rm U$ and $^{232}\rm Th$. To make the drop-shaped container, nylon films were cut for each part and connected by heat welding in a class-1 super clean room. \begin{figure}[htbp] \begin{minipage}{0.45\hsize} \begin{center} \includegraphics[width=35mm]{KamLAND-Zen-fig1.jpg} \end{center} \vspace{-6mm} \caption{Schematic view of KamLAND-Zen} \label{fig:OneFig} \end{minipage} \hspace{5mm} \begin{minipage}{0.45\hsize} \begin{center} \includegraphics[width=42mm]{miniballoon-film-connection.jpg} \end{center} \vspace{-6mm} \caption{Nylon film parts used in the mini-balloon} \label{fig:TwoFig} \end{minipage} \end{figure} \subsection{KamLAND-Zen 400} KamLAND-Zen 400 started data acquisition in October 2011 and terminated in October 2015, including a purification period from June 2012 to December 2013. In the first phase before purification (Phase-I), we found $^{110m}\rm Ag$ events in the region of interest for $0\nu\beta\beta$ decay (see Figure \ref{fig:ThreeFig})\cite{KLZen1st}. We suspect the impurities came from fallout from the Fukushima reactor accident. After Phase-I, we extracted xenon from the LS and purified it by distillation and getter filtering. After the removal of xenon, the LS was purified three times by distillation and replaced with new LS twice. Unfortunately, the inner surface of the mini-balloon was contaminated by mine air due to a pump failure, thus limiting the effective fiducial volume in Phase-II. The energy spectrum of the purified Xe-LS in Figure \ref{fig:FourFig} shows no $^{110m}\rm Ag$ peak. From the combined analysis of Phase-I and Phase-II data, the lower limit on the half-life of $0\nu\beta\beta$ decay is $\rm T_{1/2} >1.07\times 10^{26}$ years at 90\% C.L., corresponding to $\langle m_{\beta \beta} \rangle <$ 61$-$165 meV~\cite{Zen400final}. \begin{figure}[htbp] \begin{minipage}{0.45\hsize} \begin{center} \includegraphics[width=55mm]{phaseI-spectrum3.jpg} \end{center} \vspace{-8mm} \caption{Energy spectrum in Phase-I (R $<$ 1.35 m)} \label{fig:ThreeFig} \end{minipage} \hspace{5mm} \begin{minipage}{0.45\hsize} \begin{center} \includegraphics[width=55mm]{phaseII-spectrum.jpg} \end{center} \vspace{-8mm} \caption{Energy spectrum in the latter period of Phase-II (R $<$ 1.0 m)} \label{fig:FourFig} \end{minipage} \end{figure} \subsection{KamLAND-Zen 800} Due to the $\gamma$-rays from the surface contamination of the mini-balloon, the sensitivity of KamLAND-Zen 400 was restricted. Therefore, we started the KamLAND-Zen 800 project with almost 750 kg of xenon and a cleaner mini-balloon. In order to make a cleaner mini-balloon, we applied a number of techniques: clean-wear control, particle flow checks, static-electricity control by ion generation devices and humidity control, a film cover to protect the mini-balloon film, and the introduction of a semi-automatic welding machine. We used three clean-wear layers: a clean inner suit, a first clean suit worn in a class-1,000 clean room, and a second clean suit put on in a class-1 super clean room.
The mini-balloon was constructed in a separate super-clean room. The custom-order nylon film is easily charged, and static electricity collects dust containing environmental radioactivity. Static electricity is generally suppressed at 65\% humidity; therefore, a mist generation system was installed just before the ULPA filter. We also applied protective nylon film covers for the mini-balloon nylon film. When we welded films and performed leak checks on the welding lines, the mini-balloon films were protected from dust contamination by the cover films. For the KamLAND-Zen 400 mini-balloon, welding was done with a hand-pressing machine. When using this machine, a person had to lean over the nylon film, and dust from the clean suit or the person could drop onto the film. Thus we introduced a semi-automatic welding machine to avoid dust drop and person-to-person differences in press weight. The mini-balloon production followed this procedure: washing the nylon film with ultra-pure water and ultrasonic cleaning to remove initial surface contamination, setting the film covers on the mini-balloon film, cutting out each part, connecting the films by welding, leak checking with helium gas and a helium detector, and repairing holes with glue. Installation of the mini-balloon into KamLAND was done on May 10, 2018 (see Figure \ref{fig:FiveFig}). For the installation preparations at the Kamioka site, we set up a class-50 level clean room at the top of the KamLAND detector. Because of the spherical shape of the detector and the access point to the inside of the outer balloon being only 50 cm in diameter, we folded the mini-balloon, keeping its shape using perforated Teflon sheets and Teflon tubes. We applied cover nylon films between the mini-balloon film and the Teflon sheets to avoid damage during the installation process. We installed the mini-balloon filled with LS slightly heavier (+0.4\%) than the KamLAND LS density. After the mini-balloon sank in the KamLAND LS, the Teflon sheets and cover nylon films were removed and pulled up. After the installation, we filled it with slightly heavier LS (+0.015\%) without xenon, and the mini-balloon was expanded as shown in Figure \ref{fig:SixFig}. \begin{figure}[htbp] \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=35mm]{installation.jpg} \end{center} \caption{Mini-balloon installation} \label{fig:FiveFig} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=70mm]{Mini-balloon-in-KamLAND.jpg} \end{center} \caption{Expanded mini-balloon in KamLAND} \label{fig:SixFig} \end{minipage} \end{figure} We purified the non-xenon-loaded LS in the mini-balloon by distillation after the installation, because the $^{232}\rm Th$ level was slightly high, $O(10^{-15})$ g/g. After the purification, xenon dissolution into the LS was performed from December 2018 to January 2019. KamLAND-Zen 800 data acquisition started in January 2019. It is expected that after 3 years of data taking the inverted hierarchy region will be probed. \subsection{KamLAND2-Zen} Possible backgrounds for the future project KamLAND2-Zen are $^{\rm 10}$C, $^{\rm 214}$Bi, solar $^8$B neutrinos, and $2\nu\beta\beta$. To reject these backgrounds, we are pursuing the following efforts: an electronics upgrade for $^{\rm 10}$C rejection, scintillation balloon development~\cite{ScintiFilm} for $^{\rm 214}$Bi rejection, and particle ID by imaging devices for $^{\rm 10}$C and $^{\rm 214}$Bi rejection. The mini-balloon is located at almost 10 m depth in KamLAND and is therefore at 1.8 atmospheres of pressure. This means that more xenon can be dissolved, per Henry's law.
We are considering improving the xenon/LS ratio in the mini-balloon to suppress the $^{\rm 10}$C and solar $^8$B neutrino backgrounds, which are proportional to the volume. The $2\nu\beta\beta$ background could be suppressed by improving the energy resolution. In order to accomplish these goals, we have studied high-light-yield LS and high-quantum-efficiency PMTs with light-collection mirrors. Currently some development activities are ongoing, and we are preparing budget requests. \section{Summary} Liquid-scintillator-based experiments, with their ultra-low background environments, have good sensitivity for the Majorana neutrino mass search in the inverted hierarchy region of $0\nu\beta\beta$ decay. The SNO+ experiment is ongoing, with LS filling of the detector in progress. After the filling and several months of operation, tellurium will be introduced into the LS and data acquisition will be started. The lower limit on the half-life of $0\nu\beta\beta$ decay of $^{136}\rm Xe$ is $\rm T_{1/2} >1.07\times 10^{26}$ years at 90\% C.L. from the KamLAND-Zen 400 experiment, corresponding to $\langle m_{\beta \beta} \rangle <$ 61$-$165 meV. As the next phase, KamLAND-Zen 800 was started with significant improvements. Preparation for KamLAND2-Zen has begun. \bigskip \begin{center} I am grateful to Prof. Mark Chen, who gave me the figures and information about the SNO+ experiment. \end{center}
\section{Introduction} In adversarial settings, cybersecurity systems like intrusion detection systems or insider threat detectors routinely process enormous volumes of heterogeneous log data needed to perform detection and prevention functions. Tracking wide-ranging characteristics requires effective anomaly detection based on ensembles of individual detectors or on methods that can handle potential feature interactions in near real time. \begin{figure}[h] \center \includegraphics[width=1\columnwidth]{Schema.png} \caption{Anomaly detection system schema} \label{fig:ADS_st} \end{figure} Learning the normal behavior of complex stochastic systems is a prerequisite for anomaly detection (AD): better estimates of expected observations help detect abnormal cases. Advanced deep-learning methods have demonstrated unique capabilities for reconstructing or forecasting observations. But anomaly detection reaches far beyond behavior prediction (Figure\ \ref{fig:ADS_st}). A comprehensive anomaly detection system (ADS) must address several challenges: an accurate strategy for scoring and ranking the suspicion level of observations, an optimal threshold on anomaly scores for flagging the system status, discarding noise and irrelevant outliers, mitigating false alarm rates while maintaining high recall, and explaining the causality of anomalous events. Advanced behavior prediction methods may decrease uncertainties but do not provide end-to-end anomaly detectors that overcome these challenges to accurately identify suspicious activities. This task calls for extra optimization and pruning, termed the ``inference phase'' hereafter. Uncertainties in anomaly detection---caused by open-ended definitions \cite{hawkins1980identification}, the base-rate fallacy \cite{pokrywka2008reducing,sommer2010outside}, ambiguous anomaly boundaries, and noisy environments---make finding optimal thresholds for raising the `red flag' difficult. This holds even for simple scenarios like a one-detector ADS, and even more so in real-world scenarios with heterogeneous and distributed stream sets and various detectors. A slightly mistuned threshold can cause a huge false alarm rate or routinely miss vicious attacks by labeling them as normal \cite{bridges2017setting}. Due to the subtle differences between an {\it anomaly} and a {\it malicious event}, high accuracy of anomaly detection does not guarantee reliable detection of malicious activities. While malicious events are generally rare or unusual, simply applying AD to flag them can yield far too many false positives. The real task is puzzling out anomalies of interest; among those, malicious activities are the anomalies caused by adversarial actors aiming to disrupt or physically harm a system \cite{tandon2009tracking}. Adversaries usually try to blend in with the distribution of normal points \cite{emmott2013systematic}, for instance by intentional swamping and masking of events. When attacks are not confined to extreme outliers, or when the extreme outliers are not anomalous, it is difficult to distinguish anomalies of interest from normal points or inconsequential outliers. Anomalous points are expected to be mostly isolated; in some contexts, though, anomalies generated by stealthy adversarial activities may mask or cancel out each other. ADSs that focus on the uniqueness of observations to find and score anomalies do not efficiently detect attacks. Also, there is a risk of clustered anomalies, which may occur if most attacks prompt the same type of reaction in the system.
Thus, such observations might not appear as isolated anomalies to many rarity-based techniques \cite{chandola2009anomaly}. In particular, if anomalous points are tightly clustered, they will not be detectable by density-based methods. Additionally, false positives may originate from inaccurate or abrupt spikes in the error rate caused by noisy data or mistuned models. For example, a data-driven behavior predictor may miss rare periodic patterns, resulting in sharp spikes in error values even when the behavior is normal \cite{shipmon2017time}. It is important to review strategies in behavior modeling techniques that decrease the likelihood of generating such anomalies. For all the mentioned scenarios it is critical to know the related false alarm mitigation approaches. A comprehensive survey on anomaly detection techniques should provide readers with pertinent information regarding the end-to-end process, including both the behavior prediction and inference phases. Given such intuition, readers would be able to pick and combine methods based on their strengths and the applied context. \textbf{Related Work.} Considerable work on anomaly detection originates from statistical research \cite{rousseeuw2005robust, hawkins1980identification, bakar2006comparative}. Studies from computer science have reviewed and surveyed computational AD concepts \cite{agyemang2006comprehensive, patcha2007overview, hodge2004survey}; the comprehensive research survey in \cite{chandola2009anomaly} deeply analyzes the pros and cons of traditional AD methods. Other studies have collected and reviewed state-of-the-art methods in various application contexts. For example, time series AD methods have been thoroughly analyzed and categorized in \cite{gupta2014outlier} based on their fundamental strategies and input data types. Other studies have surveyed relevant topics in very specific domains; e.g., \cite{kwon2017survey} provide a comprehensive survey of deep-learning-based methods for cyber-intrusion detection, a broad review of deep AD for fraud detection is presented in \cite{adewumi2017survey}, and internet of things (IoT) related AD is reviewed in \cite{mohammadi2018deep}. \textbf{Motivation and Challenges.} Behavior modeling applied in the training of anomaly detection algorithms has received considerable attention in most review and survey papers, whereas the decision-making process of post-pruning, threshold setting, anomaly scoring, and labeling has been widely neglected. High false alarm rates tend to confuse data analysts trying to distinguish normal from anomalous (erroneous) events by relying on predictive models alone. This approach is very risky due to uncertain boundaries, continuously evolving behavior, and potential data drifts. And `chasing ghosts' is a waste of valuable resources, after all. Following a methodical approach, we review false alarm mitigation methods in anomaly detection contexts. The major contributions of our study are: \begin{itemize}[leftmargin=*] \item Building upon the extensive research and surveys on prediction- and profiling-based AD by significantly expanding the discussion toward anomaly scoring, threshold-setting techniques, and collective analysis, which can contribute to false alarm mitigation. We also identify the unique assumptions regarding the nature of anomalies made by each technique. The combination of such assumptions and pruning steps is critical for discovering the failure and success contexts of each technique.
\item Gathering a broad overview of the criteria that anomaly detection and anomaly scoring methods should address to be applicable to real-world problems. Such evaluations help distinguish these techniques more precisely and realistically, based on their ability to find anomalies that are inherently more complex. \end{itemize} \textbf{Organization.} The remainder of this paper is organized as follows. Section~\ref{sec:ProblemDefinition} first explores various anomaly definitions and interpretations, then provides an informal description of concepts related to anomaly detection on which the rest of the paper relies. Next, Section~\ref{sec:CCADS} formulates and justifies a collection of requirements which should be addressed in anomaly detection. Then, Section~\ref{sec:AFS_scaling} analytically and comprehensively reviews the methods to scale the false alarm rate, from the initial scoring stage to the decision-making stage of raising the red flag. Section~\ref{sec:post_hoc} continues the false positive mitigation topic with a {\it post hoc} analysis. Finally, Section~\ref{sec:ResearchQuestion} highlights research questions and directions for future research. \section{Basic definitions} \label{sec:ProblemDefinition} We start by drilling down into different meanings of anomalies, followed by defining anomaly detection and anomaly scoring, and recalling other key concepts and requirements involved in false alarm mitigation for anomaly detection systems. \subsection{What is an anomaly?} A general definition of an anomaly (outlier) builds on Hawkins' statement in \cite{hawkins1980identification}: "An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism". The ambiguity in this abstract view leads to divergent goals of AD systems, which can strongly influence the results. Interpretations of anomaly include: \begin{enumerate}[leftmargin=*] \item \textbf{Anomalies are rare events}. Since unusual cases do not happen frequently, anomalies can be considered rare events. Adopting this interpretation means finding a soft or hard threshold on frequencies. A poor estimation of frequency may cause a huge rate of false positives/negatives. Also, the results of methods focused on the rarity score of data points are often not comparable, because rarity is hugely dependent on the AD algorithm. \item \textbf{Anomalies are distinct events}. Based on this description, any odd event is anomalous. Being different is not meaningful without some implicit probability that ranks the degree of belonging to a distribution or model. So, a threshold is required for determining "to what extent?". For example, clustering methods \cite{portnoy2000intrusion} compute the points' anomaly scores based on their distance to the closest clusters, or the sparsity level of the cluster to which they belong. Thus, they implicitly consider distance or density as their differentiating measure. \item \textbf{Anomalies are abnormal events}. Observations which diverge from normal expectations are anomalous. As with the previous definitions, finding the degree of divergence is ambiguous and may be highly challenging. Also, the normal data should be labelled, which is not the case in many target domains, such as IDS \cite{ferragut2012new}. \end{enumerate} But none of the above interpretations of anomaly detection is expressive enough to describe the various types of anomalies and suspicious events in different domains.
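To make the divergence between these interpretations concrete, the toy Python sketch below (our illustration, not drawn from any cited work) scores the same one-dimensional sample by rarity and by distance from the mean; the two orderings disagree about which point is most anomalous.
\begin{verbatim}
import numpy as np

# Toy contrast between two anomaly interpretations on the same 1-D data:
# rarity (low empirical frequency) vs. distinctness (distance from mean).
data = np.array([0, 0, 0, 0, 10, 10, 10, 10, 5])

# Rarity score: inverse empirical frequency of each value.
values, counts = np.unique(data, return_counts=True)
freq = dict(zip(values, counts / len(data)))
rarity = np.array([1.0 / freq[x] for x in data])

# Distinctness score: absolute distance from the sample mean (mean = 5).
distance = np.abs(data - data.mean())

# The value 5 is the rarest but sits exactly at the mean, so the two
# interpretations pick different "most anomalous" points.
print("most anomalous by rarity:  ", data[np.argmax(rarity)])    # -> 5
print("most anomalous by distance:", data[np.argmax(distance)])  # -> 0
\end{verbatim}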
Based on \cite{ferragut2012new}, an ideal definition of an anomaly should be applicable to all possible distributions without extra effort. Moreover, it should provide the possibility of comparing the anomalous degree of one target variable against the anomalous level of the other variables. Thus, we provide our definition of an anomaly in a suspicious-event detection context based on the aforementioned specifications: ``An event is called a {\it suspicious anomaly} if it is highly distinct in terms of feature values, or clustered but previously unobserved, and persistent or close to previous suspicious cases, which reduces its likelihood of being noise.'' \subsection{What is anomaly scoring?} An ADS aims to order anomalies based on the anomaly scores it assigns to the data points. Transferring the original order in feature space through a scoring function $S_{AD} : X \rightarrow \mathbb{R}^{+}$ is one of the most basic methods; it assigns smaller scores to the more anomalous points \cite{zuo2000general}. \subsection{Data drift and abrupt evolution of data} The term data drift implies that the statistical properties of the target variable, or even of the input data, have changed in such a way that the predictive model decays and may lose its accuracy \cite{vzliobaite2010learning}. For instance, customers' power consumption behavior may change over time for many reasons, such as restructuring of their internal network; consequently, the consumption predictor is likely to become less and less accurate over time. Generally speaking, it is hard to determine the exact rate of data drift in an unsupervised anomaly detection context. \section{Challenges in Anomaly Detection} \label{sec:CCADS} Anomaly scoring is challenging for many reasons, including high-dimensional spaces, stochastic behavior, potential data drift, seasonality or highly irregular data observation rates, uncertain environments, mixed data types, bounded available data, varying lags in the emergence of anomalous behavior in one dimension compared to the others, and unknown hidden factors. Therefore, strong anomaly detection and scoring methods should be capable of addressing these challenges sufficiently using observed data \cite{veasey2014anomaly}. AD methods are mainly evaluated according to their \textit{detection rate} (i.e., the ratio of correctly detected anomalies to total anomalies) and \textit{false-alarm rate} (i.e., the ratio of misclassified normal data points to the total number of normal points). However, we have devised and summarized a set of meta-criteria \cite{emmott2015meta, emmott2013systematic} for anomaly scoring, as well as corresponding measures to evaluate the strength and performance of algorithms in addressing the specific circumstances of cyberattack detection and protection. \subsection{Masking effect} One anomalous point \textit{masks} a second anomaly if the latter can be considered an anomaly only by itself but not in the presence of the first point. Thus, a \textit{masking effect} may occur if the estimated mean and covariance are skewed toward a cluster of outlying observations such that an outlying point is no longer sufficiently far from the mean to be detected \cite{ben2005outlier}. As a toy example of this scenario, the behavior of a method that dynamically updates its threshold value is shown in Figure \ref{fig:swamp_mask}. Here, an attacker, aware of this ADS's strategy, gradually poisons the system; i.e.,
the attacker intentionally feeds the system with fake data within the confidence interval but different enough to make the ADS shift its threshold value. Thus, event \textit{A} is in fact a masked attack missed by the manipulated ADS. Also, if the ADS looks for too few isolated cases, clustered anomalous points can influence the statistics so that none are declared anomalies \cite{liu2008isolation}. As an example, the rarity-based ADS shown in Figure \ref{fig:rarity_mask} has missed \textit{A} and \textit{B} because of the assumption that anomalous points are few. \textit{Semantic variation}, which represents the degree to which anomalies are dissimilar \cite{lavin2015evaluating}, is one of the measures for evaluating the ability of an ADS to handle masking effects. \begin{figure}[h] \center \includegraphics[width=0.9\columnwidth]{Rarity_based.png} \caption{An example of rarity-assumption-based masking} \label{fig:rarity_mask} \vspace{-3.5mm} \end{figure} \begin{figure}[h] \center \includegraphics[width=0.9\columnwidth]{Swamp_masking.png} \caption{A simple example of masking and swamping effects} \label{fig:swamp_mask} \vspace{-3mm} \end{figure} \subsection{Swamping effect} The \textit{swamping effect}, the reverse of the masking effect, occurs if the swamped events can be considered anomalies only in the presence of other events. For instance, the false-positive events \textit{B–D} in Figure \ref{fig:swamp_mask} are swamped by the orange observations. Outlying groups which skew the mean and covariance estimates toward themselves can push normal events away from the shifted mean, so that they are isolated as anomalies \cite{ben2005outlier}. If an ADS overestimates the number of anomalies in a dataset, it can be influenced by the swamping effect. \textit{Point difficulty} is the measure which evaluates the swamping effect \cite{emmott2013systematic}. The point difficulty of each observation is measurable based on its likelihood of belonging to the other class in comparison to its current class label. Ideally, a method should be able to detect anomalies with higher point difficulty rates. \subsection{Variable frequency of anomalies} A rarity-based ADS normally performs well if the frequency of anomalies is low, ranging from 1--10\%, but may fail in other scenarios, e.g., DoS attacks, which are more frequent ($>$30\%) \cite{kim2012robust, liu2008isolation}. \textit{Scenario 1} in Figure \ref{fig:rarity_mask} illustrates an ADS which misses a group of anomalies only because of the rarity assumption. The reliability of an ADS under variable frequency conditions is measurable based on its tolerance to different degrees of \textit{relative frequency} without losing accuracy. The measure relative frequency (contamination rate) is defined as the proportion of anomalous data instances \cite{emmott2013systematic}. \subsection{Curse of dimensionality} Access to more features and detectors decreases the risk of missing influential information when performing an ML task, but it is associated with fundamental research problems \cite{zimek2012survey}, like \textit{irrelevant features}, \textit{concentration of scores and distances}, \textit{incomparable and uninterpretable scores}, and \textit{exponential search space}; a small demonstration of the concentration problem follows below. For example, each irrelevant feature increases the space dimensionality, and the sample size required by (naïve) density estimation methods tends to scale exponentially with dimensionality. Irrelevant features decrease precision and increase false alarm rates.
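To illustrate the concentration-of-distances phenomenon named above, the short Python sketch below (our illustration; data and dimensions are arbitrary) measures the relative contrast between the nearest and farthest neighbor of a query point as the dimensionality grows; the shrinking contrast is what makes distance-based anomaly scores less informative in high dimensions.
\begin{verbatim}
import numpy as np

# Concentration of distances: as dimensionality grows, the gap between
# the nearest and farthest neighbor distances (relative contrast)
# shrinks, so distance-based anomaly scores lose discriminative power.
rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    X = rng.random((500, d))          # uniform points in the unit cube
    q = rng.random(d)                 # a query point
    dist = np.linalg.norm(X - q, axis=1)
    contrast = (dist.max() - dist.min()) / dist.min()
    print(f"d={d:5d}  relative contrast={contrast:.2f}")
\end{verbatim}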
As dimensionality increases, the data domain grows, so normal points may be pulled away from the others and trapped in unimportant tails \cite{emmott2015meta}, while anomalous cases are hidden by the same unimportant similarities. \textit{Feature ranking} is a practical measure to verify a model's reliability in ignoring irrelevant features by paying attention to features according to their importance \cite{das2018a, amarasinghe2018toward}. This evaluation metric is only applicable to supervised data and is highly dependent on the applied method. \subsection{Lag of emergence} In complex systems with complicated feature associations and causal relations, the source of an anomaly may trigger system features in various ways. For instance, some features may react to events sooner than others; therefore, the anomaly flag will be raised several times with various lags. Causal relations are one of the main reasons behind this fact, and they generally lead to scenarios in which dependent features indicate the same events with variable delays. Also, having irregular sampling rates in different dimensions is another potential reason for such lags. Features with a lower sampling rate (longer interval) may indicate extreme or anomalous events that have already been deciphered from other high-frequency features \cite{veasey2014anomaly}. Thus, an ADS is expected to aggregate the results obtained from heterogeneous subsystems and features to discover the exact point of a suspicious anomaly, rather than overwhelming the users with frequent alarms over the course of time. \subsection{Domain specific criteria} Defining domain-specific measures may help evaluate ADS capabilities in very specific target domains. For example, \cite{lazarevic2003comparative} applies the ``{\it burst detection rate (BDR)}'' to capture potential bursts which indicate attacks involving many network connections. This measure represents the ratio of the total number of intrusive network connections with a score higher than the threshold to the total number of intrusive network connections within attack intervals. \section{Automatic false alarm mitigation} \label{sec:AFS_scaling} Stochastic and evolving normal behavior complicates the AD process. On the one hand, behavior predictors may not be able to find the exact underlying patterns of the data. On the other hand, a small mistake in the scoring or ranking process can lead to a huge number of false positives or false negatives. This section reviews various approaches taken by statistical, data mining, or ML methods to decrease false alarm rates by scaling anomaly scores. Some of them perform scoring and ranking simultaneously as a unified task, whereas others assign a set of initial scores to observations and then rank them based on other potentially available sources of information or collective analysis. We expect the anomaly scores assigned by an ADS to be at least in a {\it weak ordering} \cite{roberts2009}, so that events can be ranked based on their deviation from the expectation. \subsection{Improved individual scoring} This category includes methods and techniques contributing to a better scoring than simple anomaly scoring based on the error vector, i.e.\ the differences between the observations and the expectations generated by a behavior predictor. \smallskip \subsubsection{\textbf{\textit{Probability-based scoring}}} Assuming a particular distribution for the dataset, the anomaly score of the observations is computable based on their probabilities and data statistics.
Intuitively, if $x$ is less likely to happen, it is more anomalous. In other words, an anomaly score $A(x)$ respects the distribution $f$ when $A(y) \leq A(x) \iff f(x) \leq f(y)$. This approach is deemed reasonable under controlled conditions, but it is not a powerful technique in real-world scenarios \cite{bridges2017setting, ferragut2012new} because: 1) data dispersion is full of noise and often far from the known statistical distributions; 2) it may ignore the rarity condition, i.e.\ with the same threshold, long-tailed distributions provoke many more red flags than shorter-tailed ones; and 3) the obtained scores are neither comparable nor easily combined in complex configurations with multiple cooperating detectors. \noindent Variations of probability-based scoring include: \smallskip \textit{1.1) Bits of rarity.} Let us assume a model that provides a probability density distribution of values or errors \cite{tandon2009tracking}. A bits-of-rarity anomaly score of an event $x$, with probability density or mass function $f$, can be defined as: $$R_f(x) = -\log_2(P_f(x))$$ This technique uses a one-to-one transformation of the predictor results to present a more explainable ranking. The log-scale transformation of scores helps to stabilize the computations and distributes the original probabilities, $0 \leq P_f(x) \leq 1$, over a larger range of values. Also, the negative sign assigns a higher anomaly score to more diverged events. Nonetheless, expanding the scores over an unbounded range leads to large differences between the anomaly scores computed by various detectors, so that their results may not be comparable. \smallskip \textit{1.2) P-value scoring.} The p-value of a statistical test is another traditional technique for determining potential outliers \cite{schervish1996p}. Because of its independence from probability distribution functions (PDF), using the p-value to find anomalies is intuitive. Also, the p-value can rank all the observations based on their dissimilarity and rarity, the two major interpretations of anomalies; it puts a sharp bound on the {\it alert rate} based only on the probabilistic description, so it narrows down the frequency of alarms for any random distribution. But the p-value focuses only on extreme cases as a subset of all target anomalies, i.e.\ the ones that lie outside the convex hull of most of the distribution mass \cite{veasey2014anomaly, bridges2017setting}. Also, it assumes that no other alternative hypothesis is available, which is often not the case in real-world settings. After all, the choice of the significance level to reject the null hypothesis is crucial. \smallskip \textit{1.3) Bits of meta-rarity.} By considering not only an event's rarity but also the infrequency of its rarity level, bits of meta-rarity makes anomaly scores directly comparable \cite{ferragut2012new}. The formal definition of this measure is as follows: $$ A_f(x) = -\log_2(P_f(f(X)\leq f(x)))$$ Thus, it provides a strict weak ordering of the observations based on their abnormality level, so that $x >_a y$ if and only if $A_f(x) > A_f(y)$. Also, the assigned anomaly degree exceeds a given threshold value $\alpha$ with a chance of at most $2^{-\alpha}$ ($P_f(A_f(x)>\alpha) \leq 2^{-\alpha}$). Since this bound depends only on $\alpha$, rather than also on the data distribution ($f$), the number of false alarms generated by this approach can be regulated.
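To make the two transformations concrete, the following is a minimal sketch (ours) in which a kernel density estimate stands in for the model's density $f$, and both scores are computed empirically; the data and names are illustrative:
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative "normal" behavior: a two-component Gaussian mixture.
train = np.concatenate([rng.normal(-3, 1, 5000), rng.normal(3, 1, 5000)])
kde = stats.gaussian_kde(train)     # stands in for the model's density f

def bits_of_rarity(x):
    # R_f(x) = -log2 P_f(x): rarer events receive higher scores.
    return -np.log2(kde(x))

def bits_of_meta_rarity(x, sample=train):
    # A_f(x) = -log2 P_f(f(X) <= f(x)), estimated as the fraction of
    # reference points whose density does not exceed f(x).
    f_x, f_s = kde(x), kde(sample)
    p = np.array([(f_s <= v).mean() for v in np.atleast_1d(f_x)])
    return -np.log2(np.maximum(p, 1.0 / len(sample)))  # avoid log2(0)

# The point 0.0 lies between the two modes: not extreme, yet anomalous.
for x in (0.0, 3.0, 8.0):
    print(x, bits_of_rarity(x), bits_of_meta_rarity(x))
\end{verbatim}
Note that the point between the two modes receives a high meta-rarity score even though it is not an extreme value, which anticipates the Gaussian-mixture behavior discussed next.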
Moreover, by the same theory, this approach allows for comparing $X$ and $Y$, generated by $f$ and $g$, based on $A_f(X)$ and $A_g(Y)$, respectively. As Figure \ref{fig:mix_gaus} illustrates, this technique is capable of finding anomalous areas that are not necessarily extreme values, such as the very-low-probability points in a Gaussian mixture model. But it suffers from two main drawbacks: 1) the value of $\alpha$ can hugely affect the false positive rates; 2) bits of meta-rarity generally ignores clustered anomalies. \begin{figure}[h] \center \includegraphics[width=0.7\columnwidth]{mutual_gaussian.png} \caption{Bits of meta-rarity of 0.2 in a mixture of normal distributions \cite{veasey2014anomaly}} \label{fig:mix_gaus} \vspace{-3.5mm} \end{figure} \smallskip \subsubsection{\textbf{\textit{Q-function based scoring}}} By presuming a Gaussian distribution for the error vector obtained from predictions and normal observations, some studies fit the anomaly scores based on a Q-function as a tail distribution function \cite{goldstein2016comparative, shipmon2017time, ahmad2017unsupervised}. For example, a Q-function scoring is used in \cite{malhotra2015long} to analyze error values. At first, a stacked LSTM model is trained to predict the next $l$ values for $d$ dimensions of the input variables. Then, as the predictor slides through the observations at $\{t-l, ..., t-1\}$, it forecasts the value of each time point $x(t)$ $l$ times. Next, the ADS generates an error vector, $e(t)$, as the difference between $x(t)$ and its predicted values at the different time points ($t-j$): $e(t) = [e(t)_{11}, \ldots, e(t)_{1l}, \ldots, e(t)_{d1}, \ldots, e(t)_{dl}]$. Then, it fits a multivariate Gaussian distribution to the error vectors ($N = N(\mu, \Sigma)$). This means the likelihood ($p(t)$) of observing an error vector ($e(t)$) is equal to the value of the density $N$ at $e(t)$. Thus, an observation $x(t)$ is \textit{anomalous} if $p(t) < \tau$, where $\tau$ is computed by maximizing the $F_\beta$-score on a held-out validation set. A fast comparison between new error values and the compact representations of the prior cases is one of the main advantages of this method \cite{shipmon2017time, ahmad2017unsupervised}. However, the assumption of normally distributed errors is violated if the error values are not random, which is likely in many data-driven methods \cite{hundman2018detecting}. \smallskip \subsubsection{\textbf{\textit{Similarity based scoring}}} These methods compute the anomaly score of new observations based on their distance to other groups of observations, like the set of their $k$ neighbors. Distance-based techniques confront the threshold-setting problem in the early stages of the AD process by looking for the appropriate distance (similarity) to distinguish the far points from the close ones. Thus, this step is comparable to behavior modeling in profile-based ADSs. Distance metrics can be grouped into the following three categories \cite{weller2015survey}: \begin{itemize}[leftmargin=*] \item \textit{Power distances.} Distance measures which use a formula mathematically equivalent to the power of $(p, r)$: $$ Distance(X, Y) = \left ( \sum_{i=1}^{n}\left | x_i - y_i \right |^p\right )^{\frac {1}{r}}$$ For example, the Manhattan and weighted Euclidean distances belong to this category. Advanced power distances are more practical, but not necessarily intuitive or compatible with the physical distance concept \cite{deza2009encyclopedia}.
\item \textit{Distances on distribution laws.} These describe distance measures based on the probability distribution of the dataset, like the Bhattacharya coefficient \cite{patra2015new} or the $\chi^2$ distance \cite{deza2009encyclopedia}. \item \textit{Correlation similarities.} This group characterizes the correlation between two datasets as a measure of similarity or distance, such as Kendall's $\tau$ rank correlation and learning vector quantization \cite{kohonen1990self}. \end{itemize} Utilizing appropriate distance metrics corresponding to the data distribution can improve the anomaly scoring phase. For example, due to differences in feature distributions, the Euclidean distance is not the right metric to capture the real distance of points from the mean of normal data. Thus, some anomaly detection studies \cite{wang2004anomalous, lazarevic2003comparative} take advantage of the Mahalanobis metric \cite{mccrae1987creativity}, which is able to take into account the variables' variances and covariances in addition to the average value. Scoring anomalies using the similarity evaluation measures from the second and third groups is almost comparable to the probability-based and information-theory-based scoring techniques, respectively, which are discussed in their related sections. In sum, distribution-aware measures are valuable for their improvements in the similarity evaluation phase \cite{weller2015survey}, which contribute to a more reliable estimation of the divergence of new observations from normal expectations. \smallskip \subsubsection{\textbf{\textit{Extreme value theorem}}} \label{sec:EVT} The extreme value theorem is explored in \cite{siffer2017anomaly} to find distribution-independent bounds on the rate of extreme values in univariate numerical time series. This technique does not require any manual threshold setting, but needs one parameter, the {\it risk factor}, to control the number of false positives. The law of extreme values states that extreme events have similar kinds of distributions, regardless of the main data distribution, as long as it is standard \cite{fisher1928limiting}: $$G_\gamma : x \mapsto \exp\left(-(1+\gamma x)^{-1/\gamma}\right), \quad \gamma \in R,\; 1+\gamma x>0$$ Here $\gamma$ is called the {\it extreme value index} and depends on the original distribution; e.g.\ it is zero for the Gaussian ($N(0, 1)$). When events are extreme, the shapes of the distribution tails are almost similar, so $G_\gamma$ can represent all of them. Based on this theory, a streaming anomaly detector, called SPOT, is proposed in \cite{siffer2017anomaly}. In the first step, SPOT computes $z_q$ as the hard threshold and $t$ as the soft threshold by fitting a generalized Pareto distribution and then utilizing an appropriate extreme value distribution. As Figure \ref{fig:st_update} illustrates, SPOT flags the data points that exceed the $z_q$ threshold as abnormal, while it keeps updating $z_q$ based on the rest of the observations (non-abnormal cases). Each non-abnormal case fits one of the following scenarios: 1) \textit{Peak case.} It is greater than the initial threshold $t$, so SPOT adds the excess to the set of peaks and updates the value of $z_q$; 2) \textit{Normal case.} It is a common value.
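The initialization step of such a detector can be sketched as follows (our illustrative peaks-over-threshold code, not the published SPOT implementation); it fits a generalized Pareto distribution to the excesses over a soft threshold $t$ and derives the hard threshold $z_q$ from the standard POT quantile formula with risk factor $q$:
\begin{verbatim}
import numpy as np
from scipy.stats import genpareto

def pot_thresholds(data, t_quantile=0.98, q=1e-4):
    # Soft threshold t, then a generalized Pareto fit to the excesses;
    # z_q follows the standard POT quantile formula with risk factor q.
    t = np.quantile(data, t_quantile)
    excesses = data[data > t] - t
    gamma, _, sigma = genpareto.fit(excesses, floc=0.0)
    n, n_t = len(data), len(excesses)
    if abs(gamma) > 1e-9:
        z_q = t + (sigma / gamma) * ((q * n / n_t) ** (-gamma) - 1.0)
    else:
        z_q = t - sigma * np.log(q * n / n_t)
    return t, z_q

stream = np.random.default_rng(1).standard_normal(100000)
t, z_q = pot_thresholds(stream)
print(t, z_q)   # points above z_q would be flagged as abnormal
\end{verbatim}
The single parameter $q$ plays the role of the risk factor: lowering it raises $z_q$ and therefore reduces the expected number of flagged extremes.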
\begin{figure}[h] \center \includegraphics[width=0.9\columnwidth]{stationary_update.png} \caption{Updating anomaly scores in stationary streams \cite{siffer2017anomaly}} \label{fig:st_update} \vspace{-2mm} \end{figure} \subsection{Unified scoring} The issue of non-comparability and non-interpretability of different ADS results is targeted in \cite{kriegel2011interpreting}. Unified scoring \cite{kriegel2011interpreting} converts any arbitrary ``anomaly factor'' to the interpretable range $[0, 1]$ as an indicator of the abnormality probability. Defined based on Hawkins' idea, this unification transformation method includes two steps, where either step might be optional (depending on the type of score $S$): 1) a regularization, which maps a score $S$ to the interval $[0, \infty)$, so that ${Reg}_{S(o)} \approx 0$ represents inliers and ${Reg}_{S(o)} \gg 0$ indicates outliers; 2) a normalization, which transforms a score into the interval $[0,1]$. The applied transformation method should be {\it ranking-stable}, which means it should not change the ordering of the original scores.\\ However, the authors do not propose any solid algorithm for applying the mentioned mapping to arbitrary AD scoring; they only offer some abstract, generic hints. \subsection{M-estimation scoring} \label{sec:M-estim} Based on the fact that the means of tail estimations indicate the anomalous level of data points in univariate domain spaces, the M-estimator \cite{clemenccon2013scoring, clemenccon2018mass} is proposed to simulate the same property in higher-dimensional spaces. This technique can address unsupervised scoring and ranking of anomalies in multivariate domain spaces. The M-estimator captures the extreme behavior of the high-dimensional random vector $X$ based on the univariate variable $s(X)$, which can be summarized by its tail behavior near zero, such that the smaller the score $s(x)$, the more abnormal the observation $x$. This technique uses the mass-volume ($MV$) curve as a functional performance criterion to estimate the density function. Then, it provides a strategy to build a scoring function $s$ whose $MV$ curve is asymptotically close to the empirical estimate of the optimal mass-volume curve (${MV}^{\ast}$). In the next step, the functional criterion is optimized over a set of piecewise constant scoring functions. In the end, the feature space is overlaid with a few well-chosen empirical minimum volume sets, as Figure \ref{fig:pw-MV-opt} illustrates. \begin{figure}[h] \center \includegraphics[width=0.9\columnwidth]{pw-mv-optimal.png} \caption{Left: piecewise adaptive approximation of $MV^{\ast}$; right: the associated piecewise scoring function \cite{clemenccon2018mass}} \label{fig:pw-MV-opt} \vspace{-2mm} \end{figure} \subsection{Improved Threshold Computation} This section reviews a set of customized threshold-setting strategies that can help mitigate false alarm rates. \smallskip \subsubsection{\textbf{Receiver Operating Characteristic curve (ROC)}} This is a graph showing the performance of a classification model at all potential thresholds. The ROC or precision--recall curve is one of the common techniques used by supervised AD methods to find the best error threshold for discretizing the ranked observations. Since this technique is not suitable for an unsupervised context, it should be replaced with the MV curve \cite{clemenccon2013scoring, clemenccon2018mass, goix2016evaluate}, as explained in \ref{sec:M-estim}.
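For the supervised case, threshold selection from the ROC curve can be sketched as follows (ours; the labels and scores are synthetic, and Youden's $J$ statistic is only one common choice of operating point):
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(0, 1, 950), rng.normal(3, 1, 50)])
labels = np.concatenate([np.zeros(950), np.ones(50)])

fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)          # Youden's J statistic
print(thresholds[best])              # chosen error threshold
\end{verbatim}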
\smallskip \subsubsection{\textbf{Dynamic threshold}} In \cite{hundman2018detecting}, a dynamic-threshold-based approach is proposed for evaluating residuals to address non-stationarity and noise issues in data streams. It includes the following two main steps:\\ \noindent{\textit{a. Error computation and smoothing.}} It computes a one-dimensional error vector based on the expectation and observation values. Then, it smooths this error vector using an exponentially-weighted average to get ${error}_s$: $${error}_s =[e_s(t-h),\ldots,e_s(t-1), e_s(t)]$$ where $h$ determines the number of historical error values used to evaluate the current errors. \noindent{\textit{b. Threshold calculation and anomaly scoring.}} It optimizes a threshold, $\epsilon$, so that removing all the values above it leads to the greatest percentage decrease in the mean and standard deviation of the smoothed errors $e_s$. Then, the normalized score of the highest smoothed error, $e_s(i)$, in each sequence of anomalous errors is determined based on its distance from the chosen threshold. However, finding the best value for the multiplier $z$, which parameterizes the candidate thresholds of the form $\epsilon = \mu(e_s) + z\sigma(e_s)$ in \cite{hundman2018detecting}, is extremely context-dependent. So, the problem of threshold optimization remains, but at a smaller scale. Also, adding gradual anomalies to the data can lead the system to increase the threshold value and miss the real attacks. \subsection{Sequence based scoring} Having a bird's-eye view of the whole sequence can lead to a higher recall and lower false alarm rates \cite{ahmed2017thwarting, zohrevand2016hidden,zohrevand2020dynamic}. However, it comes at the cost of losing real-time response, unless the ADS performs anomaly detection gradually \cite{zohrevand2016hidden}. This section reviews three different techniques that apply the collective relation of data points to score the whole sequence. \smallskip \subsubsection{\textbf{Information theory based AD}} Traditionally, information-theoretic measures like {\it (conditional) entropy, relative entropy}, and {\it information gain} were very popular for tracking the anomaly likelihood in the data. For example, the conditional entropy, $H(X|Y)$, of the system-call subsequences can help to determine the suspicious traces \cite{lee2001information}. Also, the computed relative entropy can help to validate the model's quality on the new observations and detect data drifts. \smallskip \subsubsection{\textbf{Likelihood ratio method}} In this family of strategies, the probability of any data point is computable from the preceding observations. Therefore, a sequential anomaly score is a function of the data points' likelihood in the sequence, so a sequence with a very low generation probability should be marked as an anomaly. Many ADS studies and applications in various areas, like intrusion detection and speech recognition, have applied different modifications of this strategy. Three high-level methods in this category are: \begin{itemize}[leftmargin=*] \item \textit{Finite State Automata (FSA).} It trains an FSA; if tracing a sequence ($x$) in the FSA ends up in a state without an outgoing edge matching the next value in the test sequence, the sequence is labeled as an anomaly \cite{chandola2009anomaly}. \item \textit{Markov Models.} It obtains the conditional probability of the observed symbols and their transitions to each other. Anomalous series are distinguished based on their lower generation probability \cite{sun2006mining}. \item \textit{Hidden Markov models (HMM).} An HMM learns the underlying patterns of training sequences.
Then, the likelihood that a test sequence was generated by the HMM is verified using decoding algorithms like the {\it Viterbi algorithm} \cite{zohrevand2016hidden}. \end{itemize} A distinguishing property of this group of techniques is their ability to consider the observations' correlation and order when scoring the whole sequence. However, this category of analysis decreases the possibility of detecting short-term abrupt differences as anomalies. \subsection{Collective analysis} The methods described in this section take advantage of extra information, like previous observations, contextual information, or correlations, to improve the score or verify the labels assigned to the observations. \smallskip \subsubsection{\textbf{Voting-based methods}} Applying a hybrid of various detectors to detect anomalies is very promising and decreases the chance of raising a false alarm. Some studies even apply voting based on a combination of the currently generated alarms and historical feedback obtained from the system administrator to rank alerts. For example, a voting technique based on HMM models is proposed in \cite{zohrevand2016hidden} to perform anomaly scoring. This ADS confirms the anomaly detection results by applying a group of reference windows (RW) whose context is very similar to the current test window (TW). If both windows have a similar overall transition, the detected anomaly at the lower level is unreasonable, and its anomaly score should be decreased based on the similarity ratio. Otherwise, the anomaly score obtained in the leaf nodes is increased according to the inverse of the similarity ratio. \smallskip \subsubsection{\textbf{Rolling feature based method}} An online collective analysis technique is proposed in \cite{zohrevand2020dynamic} to handle the \textit{flagging} of suspicious events based on anomaly detection techniques. It assumes that attacks aiming at severely disrupting the system usually cause lasting cascading effects, so a persistent anomalous interval is more suspicious of being an attack than a single strike caused by sensor noise or predictor faults. Thus, this study performs dynamic flagging based on a deviation--persistency trade-off. To this end, it computes the moving average ($\bar{M^s}$) of the standardized error vectors ($\bar{Z^s}$) within the temporal window ($w_{p}$), selected as the average number of steps that an anomalous event is typically expected to last in order to be associated with an attack. $${m^s_t = m^s_{t-1} + (z^s_t - z^s_{t-w_p})/{w_p}} \vspace{-1mm}$$ Thus, the first observation ($x^s_i$) with an $m^s_t$ value higher than the threshold triggers a red flag for a local attack, provided that at least 50\% of the points in the window $w_{p}$ directly preceding $x^s_i$ have been detected as anomalous; a minimal sketch of this flagging rule is given below.
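The following sketch is ours: the moving-average update follows the formula above, while the per-point anomaly criterion inside the window (individual errors above the same threshold) is an illustrative assumption.
\begin{verbatim}
import numpy as np

def persistent_flags(z, w_p, threshold, min_frac=0.5):
    # Moving average m_t = m_{t-1} + (z_t - z_{t-w_p}) / w_p of the
    # standardized errors; a point is flagged when m_t exceeds the
    # threshold and at least min_frac of the preceding window was
    # itself anomalous (the text flags the first such point).
    flags, m = [], 0.0
    for t in range(len(z)):
        z_out = z[t - w_p] if t >= w_p else 0.0
        m += (z[t] - z_out) / w_p
        if t >= w_p:
            window = z[t - w_p:t]
            if m > threshold and (window > threshold).mean() >= min_frac:
                flags.append(t)
    return flags

z = np.abs(np.random.default_rng(3).standard_normal(500))
z[300:340] += 3.0                    # a persistent anomalous interval
print(persistent_flags(z, w_p=20, threshold=1.5))
\end{verbatim}
Isolated spikes leave the moving average essentially untouched, which is exactly the intended robustness to sensor noise and predictor faults.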
\smallskip \subsubsection{\textbf{Scoring in different levels of granularity}} Anomaly scoring at different levels of granularity allows traceability down to the finer granularity levels and decreases the chance of missing low-level patterns. The anomalies obtained at the lower levels can be grouped to ultimately find anomalous cases at the subsystem level. A very logical break-down in network IDSs is modeling detectors at the node and network levels, to be able to trace both focused and distributed attacks. Another approach in this context is applying break-down and aggregation along the time dimension to trace and balance the scores based on their short- and long-term stability. For instance, \cite{zohrevand2016hidden} applies a hierarchical confirmation procedure to improve accuracy. In this method, global Markovian models are applied at the higher levels of the hierarchy to verify the leaf nodes' results. The overall approach at the higher levels is very similar to that at the lowest level, except that the comparisons focus on the state transitions instead of the real values of the observations. The hierarchy of granularity can be defined based on various features, like the space dimension. For example, to find the spatio-temporal objects whose thematic attributes are significantly different from those of the other objects, \cite{cheng2006multiscale} proposes a method based on multi-granularity and cluster differentiation. \smallskip \subsubsection{\textbf{Alarm correlation}} Constructing potential attack scenarios based on data aggregation is another strategy for mitigating false alarms. The aggregation process can be performed by grouping alarms, possibly generated by different detectors or at different places in the network or sequence, and then reconstructing the attack scenarios. Performing correlation analysis and generating the possible scenarios helps to extract concrete and interpretable inferences that decrease the false-alarm rates. Correlation analysis can be considered at different levels of abstraction, like the correlation of events from the same or heterogeneous detectors, the correlation of events in one detector through the time dimension, or the correlation of events across different nodes in the network \cite{hubballi2014false}. Some alarm correlation analysis techniques are: \begin{itemize}[leftmargin=*] \item \textit{Multi-step correlation.} By assuming that a sequence of actions is required to conduct a malicious mission in the system, finding a correlation among the observed anomalies may help to anticipate the malicious event before it happens \cite{ning2002analyzing, hubballi2014false}. One of the relevant techniques is applying frequent pattern mining to track frequent IDS alarm combinations as indicators of malicious sequences \cite{sadoddin2009incremental}. \item \textit{Causal relation based correlation.} It verifies the causal correlation between existing variables and detectors. For instance, by considering a Bayesian network whose nodes are alarms and whose edges are relationships obtained from the time-based coincidence of alerts and their mutual information, the system can generate hyper-alerts \cite{qin2007discovering,zohrevand2020dynamic}. \item \textit{Subsystem graph based correlation.} A system typically includes many subsystems with cascading influence on each other. So, an attacker may use a weakly protected subsystem, regarded as a low-impact vulnerability, as a foothold to reach the most critical subsystems and servers in the network. Thus, some IDSs identify possible penetrations into critical systems by focusing on the existing dependencies and interconnections in the network of systems, and balance anomaly scores based on the penetration paths which the events might cause \cite{roschke2011new,valdes2000approach}. \end{itemize} \smallskip \subsubsection{\textbf{Alarm verification}} The whole idea of such post-analysis approaches is verifying whether the detected unusual events will impact the system. Such verifications help an ADS to categorize the detected cases based on the seriousness of their impact \cite{hubballi2014false,bolzoni2007atlantides}.
There are two types of verification mechanisms: \textit{passive verification} and \textit{active verification}. The former performs the verification process against a database of possible success cases, while the latter verifies the generated alarms in an online manner, which seems more promising for detecting zero-day attacks and for applicability in the context of stream-data anomaly detection. However, these strategies may fail if attackers generate spurious patterns of responses to mislead the IDS into believing that the attacks will fail. For example, in signature-based methods, there is a class of {\it mimicry attacks} in which the attacker sends a fake normal response on behalf of the server. Thus, the IDS fails to detect the malicious behavior and ignores the alarm \cite{todd2007alert}. \smallskip \subsubsection{\textbf{Apply contextual information}} There are three major ways in which utilizing contextual information can help an ADS to improve its precision: \begin{itemize}[leftmargin=*] \item \textit{Multivariate analysis.} It considers contextual information as extra features added to the existing data and trains a multivariate model \cite{zohrevand2016hidden}. \item \textit{Error balancing.} It performs an early prediction and readjusts the results by benefiting from contextual information \cite{zohrevand2017deep}. \item \textit{Post verification.} It utilizes contextual information in the post-verification phase to prune unrelated anomalies and false alarms \cite{radon2015contextual}. \end{itemize} \section{Post-hoc mitigation of false alarms} \label{sec:post_hoc} The precision of ML methods is highly dependent on the comprehensiveness of the available observations during the training phase. This issue is even more common in profile-based anomaly detection approaches because they are influenced by the training data in two ways: 1) to fit an accurate prediction model, and 2) to set a precise isolation boundary. Meanwhile, data streams are prone to various drifts (like trends and abrupt evolution), which make a model built on old data inconsistent with the new data and may increase false-positive rates \cite{hundman2018detecting}. Thus, this section reviews a set of strategies mainly focused on utilizing user feedback or the history of observations to readjust the threshold for relabeling the data points. These types of techniques are particularly beneficial in non-real-time configurations, like fraud detection or other contexts in which an ADS should prioritize the suspicious cases and there is no need for instant reactions. But some ADSs also take advantage of the knowledge obtained from this step to improve their future scoring process. \subsection{Maximum error value based pruning} The ADS proposed in \cite{hundman2018detecting} prunes the detected anomalies based on the maximum value of all the observed anomalous points. To this end, this method keeps track of a set, called $e_{max}$, containing the top anomalous values ($e_a$) of the error sequences, $E_{seq}$, sorted in descending order, plus the maximum smoothed error value that is not anomalous: $$e_{max} = \max(\{e_s \in E_{seq} \mid e_s \in e_a \})$$ Then, it goes through this sequence and computes the percentage-decrease values as: $$d^{(i)} = \frac{e_{max}^{(i-1)} - e_{max}^{(i)}}{e_{max}^{(i-1)}}, \quad i \in \{1, 2, \ldots, |E_{seq}|+1\}$$ Next, it uses a threshold $p$ as the minimum percentage decrease required to retain the anomalous status.
If at a specific step ($i$) the threshold $p$ is exceeded by $d^{(i)}$, the anomalous status of all the cases before it ($e^{(j)} \in e_{max}$, $j < i$) remains valid. If, however, the threshold $p$ is not met by $d^{(i)}$ and all its subsequent values ($d^{(i)}, d^{(i+1)}, \ldots, d^{(|E_{seq}|+1)}$), the corresponding smoothed error sequences are reclassified as normal. \subsection{Rarity based post pruning} The main idea of this technique is based on the rarity assumption in the AD context. For example, the method applied in \cite{hundman2018detecting} is configured to consider a minimum frequency for the number of observed anomalies of the same magnitude, such that future occurrences of this frequent category are classified as normal. The prior anomaly scores for a data stream can be used to set an appropriate threshold, depending on the desired balance between precision and recall. However, this pruning step is highly prone to being misled by attackers, effectively giving a green light to malicious behaviors. \subsection{Active-learning based scoring} If the ADS has a mechanism by which human analysts can label a subset of the data, the system can take advantage of the provided labels to set a threshold, $s_{min}$, for a given stream based on the lower and upper bounds of the scores of the confirmed anomalies. The ADS proposed in \cite{das2018a} finds anomalies by designing a hyperplane that passes through the uncertainty regions of the decision boundaries learned through active learning. The best threshold should be found to customize this hyperplane. To minimize the learning interaction with the end user, it assumes that the analyst can only label a limited batch of $b$ instances in each feedback iteration, so this set should be diverse and impactful. This technique is performed in the following three steps: 1) select the set $Z$ of top-ranked instances; 2) select $S$ compact subspaces that contain $Z$; 3) select the set $Q$ ($Q \subset Z$) of $b$ instances which belong to minimally overlapping regions. \section{Improved predictor models} Applying a predictor model \cite{gamboa2017deep, malhotra2015long} capable of capturing the latent complex patterns of the data will unquestionably contribute to higher recall and lower false alarm rates. This section briefly outlines several predictors suitable for anomaly detection. \subsubsection{Robust anomaly detection} The number of data points in the anomalous clusters, or their distances to the normal cases, should not influence the decision boundary; otherwise, the ADS may overlook part of the anomalies (masking) or mislabel some normal cases as anomalous (swamping). Thus, some studies \cite{xiong2011direct, DBLP:journals/widm/RousseeuwH11} are focused on improving the quality of the underlying method, like applying robust PCA instead of PCA, or robust matrix factorization, which can result in better false alarm and recall rates. \subsubsection{Advanced data driven models} Traditional algorithms are not scalable enough to handle complex behavioral patterns in big data. Besides, reducing dimensions or selecting the most important features out of thousands of dimensions is not a trivial task for traditional models. Deep models, in contrast, are qualified to address all of the mentioned challenges.
There are a variety of studies which apply different deep models as the predictor model to perform anomaly detection: 1) discriminative models: RNNs like long short-term memory (LSTM) and gated recurrent units (GRU) \cite{malhotra2015long, nanduri2016anomaly}, convolutional neural networks (CNN) \cite{vinayakumar2017applying}, and deep neural networks (DNN) \cite{akhter2012detecting}; 2) generative models: different versions of auto-encoders \cite{sakurada2014anomaly, an2015variational} and sum-product networks \cite{poon2011sum}; and 3) generative adversarial networks (GAN) \cite{schlegl2017unsupervised}. For more detail, we refer the reader to the following research surveys: \cite{kwon2017survey, adewumi2017survey, mohammadi2018deep}. \subsubsection{Hybrid models} Due to achieving locally optimal results and considering a limited number of hypotheses, none of the existing individual models can be considered a perfect approach. To address this problem, ensemble and hybrid methods that find optimal non-homogeneous decision boundaries between normal and anomalous cases \cite{kazienko2013hybrid,das2018a} can be utilized. For example, a hybrid of model-driven and data-driven methods is utilized in many studies to cover the potential weaknesses of each side \cite{khashei2011novel, egrioglu2013fuzzy, zohrevand2017deep}. Also, deep hybrid models \cite{erfani2016high,javaid2016deep,weston2012deep,poultney2007efficient} use an unsupervised technique prior to the task of interest to learn reduced, representative features. \section{Research questions} \label{sec:ResearchQuestion} Upon studying a broad range of works on false alarm mitigation, we call special attention to the following important open issues: \begin{itemize}[leftmargin=*] \item \textit{Evaluation based on public benchmarks.} Most of the works evaluate their methods on a local or custom dataset. Analyzing the methods' performance in terms of the false-positive reduction ratio on a common benchmark, like NAB or PyOD \cite{lavin2015evaluating,zhao2019pyod}, can help in understanding their usefulness and applicability. \item \textit{Evaluation based on common scoring mechanisms.} The results reported by many of the studies are often not clear indicators of a method's capabilities in different respects, such as addressing the challenges mentioned in Section \ref{sec:CCADS}. Collecting measures and labeled datasets to evaluate an arbitrary method in all these respects is a necessity. \item {\it Uniformity.} Since most of the techniques use a custom range and format, comparing them is not straightforward. Thus, a uniform format to present anomaly scores and their ranking is required. Besides enhancing comparability, uniformity provides the possibility of ensembling several ADS methods. \item {\it Performance.} The resources required by the AD algorithm, including time and space complexity, are also important characteristics for damage control, particularly in real-time applications. However, most studies do not report performance aspects. \item {\it Addressing data drift.} Real-world stream datasets typically include gradual or sharp shifts over time. Methods should adjust themselves using incremental learning or other methodical approaches to address the drifts and evolutions which happen in the target domain. \end{itemize} \section{Conclusions} To the best of our knowledge, a comprehensive study on the critical steps of false alarm mitigation in the anomaly detection context has been missing, although it is badly needed.
This paper provides an analytic review of the methods found in the literature at the time of writing. Most of the studied methods focus on improving the behavior modeling phase, even though the final scoring can immensely influence the system's applicability. We have collected here the known strategies that can contribute to reducing the false alarm rate. \textit{Predictive model improvement} focuses on extracting complex latent patterns from the data; the \textit{anomaly scoring improvement} strategies take advantage of rarity and probability values to score anomalies. The essential techniques of \textit{threshold computation} aim to overcome the difficulties of finding the best threshold to distinguish normal and anomalous cases. \textit{Post-hoc pruning} processes are another type of strategy, updating the threshold-related parameters based on the running system's performance. {\it Collective analysis} strategies rescale the assigned anomaly scores based on a collection of observations, like their correlation with each other. Also, {\it sequence-based scoring} approaches and their applications for mitigating false alarm rates by analyzing sub-sequences are briefly reviewed. Despite the many existing approaches, further improvements are needed in mitigating false alarms so as to make anomaly detection practicable for real-world application domains, e.g.\ in the cyber arena, for threat detection and for blocking increasingly sophisticated zero-day exploits. After all, even a very small false-alarm rate can mean an overwhelming absolute number of false positives---a notorious challenge that is still a long way from being overcome.
{ "timestamp": "2020-09-01T02:21:10", "yymm": "1904", "arxiv_id": "1904.06646", "language": "en", "url": "https://arxiv.org/abs/1904.06646" }
\section{introduction} \label{s_uvod} A convex \emph{polytope} $P$ can be defined as a bounded intersection of finitely many halfspaces. More precisely, it is a bounded solution set of a finite system of linear inequalities: \[ P=P(A,b):=\{x \in \mathbf{R}^n \mid \langle a_i,x\rangle\geqslant b_i, \; \; 1 \leqslant i \leqslant m\}, \] where $A \in \mathbf{R}^{m \times n}$ is a real matrix with rows $a_i$, and $b\in \mathbf{R}^m$ is a real vector with entries $b_i$. Here, boundedness means that there is a constant $N$ such that $\Vert x \Vert \leqslant N$ holds for all $x \in P$. Also, a convex polytope can be defined as the convex hull of a finite set of points in $\mathbf{R}^n$. Although equivalent (\cite[Theorem~1.1]{Z95}), these two definitions are essentially different from an algorithmic point of view. Throughout this paper, we use both. Since we consider only convex polytopes, we omit the word ``convex''. The \emph{dimension} of a polytope $P$, denoted by $\dim(P)$, is the dimension of its affine hull. A polytope of dimension $d \leqslant n$ is written as a \emph{$d$-polytope}. For a hyperplane $H$, the intersection $P\cap H$ is called a \emph{face} of $P$ when $P$ lies in one of the halfspaces determined by $H$. If $P\cap H\neq \emptyset$, then $H$ is a \emph{supporting hyperplane}. We say that a face $F$ of $P$ is parallel to a given hyperplane $\pi$ when there is a hyperplane $H$ parallel to $\pi$ which defines $F$. Faces of dimensions 0, 1, and $d-1$ are called vertices, edges, and facets, respectively. The sets of vertices and facets are denoted by $\mathcal{V}(P)$ and $\mathcal{F}(P)$, respectively. A $d$-polytope is called \textit{simple} if each of its vertices belongs to exactly $d$ facets (equivalently, to exactly $d$ edges). For the polytope $P=P(A,b)$, the halfspace defined by the $i$th inequality $\langle a_i,x\rangle\geqslant b_i$ is called \emph{facet-defining} when $\{x \in P \mid \langle a_i,x\rangle = b_i\}$ is a facet. Hence, $-a_i$, an \emph{outward normal vector} to that halfspace, is an outward normal vector to that facet. For an equation that corresponds to the hyperplane $\pi$, the \emph{halfspaces} $\pi^{\geqslant}$ and $\pi^{\leqslant}$ are defined as $\pi$, save that ``$=$'' is replaced by ``$\geqslant$'' and ``$\leqslant$'', respectively. For an arbitrary polytope $P$, $\pi^\geqslant$ is \emph{beneath} a vertex $V \in P$ when $V$ belongs to $\pi^>$, and $\pi^\geqslant$ is \emph{beyond} $V$ when $V$ does not belong to $\pi^\geqslant$. A \textit{truncation} tr$_FP$ of $P$ in its proper face $F$ is a polytope $P \cap\pi^\geqslant$, where $\pi^\geqslant$ is beneath every vertex not contained in $F$ and beyond every vertex contained in $F$. This truncation is \emph{parallel} when $F$ is parallel to $\pi$. In this paper, we assume that all truncations are in faces that are not facets. A simple polytope named the \textit{permutoassociahedron} belongs to a family that generalises a well-known family of polytopes called \textit{nestohedra}, i.e.\ \textit{hypergraph polytopes} (see \cite{FMP94}, \cite{P09} or \cite{PRW08}). Nestohedra appear in many fields of mathematics, especially in algebra, combinatorics, geometry, topology and logic. Roughly speaking, we can understand this family as the polytopes that can be obtained by truncations in the vertices, edges and other proper faces of a $d$-\textit{simplex}.
The recipe that prescribes which faces of the simplex will be truncated can be defined with respect to a \textit{building set}, which is a special kind of \textit{hypergraph} (see \cite{P09}). Thus, we get simplices as the limit case of the family, when the building set is minimal and no truncation has been made. As the limit case at the other end, when the building set is maximal and all possible truncations have been made, we have \textit{permutohedra}. There are also other well-known members of this interval, but for the needs of this work, besides the permutohedron, the most important is the \textit{associahedron}, or \textit{Stasheff polytope} (see~\cite{S63}). The permutoassociahedron arises as a ``hybrid'' of these two nestohedra. In order to bring the reader closer to our motivation for investigating this compound, and to give a clearer understanding of its nature and combinatorics, we recall some combinatorial characteristics of its building elements. For more details on permutohedra and associahedra, we refer to \cite{Z95}, \cite{T06}, \cite{CD06} and \cite{S63}, \cite{BP15}, \cite{S97}, \cite{P09}, respectively. Combinatorially, the permutohedron is a polytope whose vertices correspond to the words obtained by all permutations of $n$ different letters. It can be realised by an $(n-1)$-polytope $\mathbf{P}_n$, whose vertices are obtained by permuting the coordinates of a given generic point in $\mathbf{R}^n$. Thus, the cardinality of the set $\mathcal{V}(\mathbf{P}_n)$ is $n!$. Two vertices are adjacent if and only if their corresponding permutations can be obtained from one another by a transposition of two consecutive coordinates, i.e.\ consecutive letters. Figure~\ref{s:pn} depicts $\mathbf{P}_n$ for $n\in \{2,3,4\}$. \begin{figure}[h!h!h!] \begin{center} \begin{tabular}{ccc} \begin{tikzpicture}[scale=0.6] \draw (-1,0) node[below] {$ab$}-- (1,0)node[below] {$ba$}; \filldraw [black] (-1,0) circle (1.4pt) (1,0) circle (1.4pt); \end{tikzpicture} &\hspace{-0.5cm} \begin{tikzpicture} [scale=0.9] \draw (-1,0) node[below]{$abc$} -- (1,0)node[below]{$bac$} -- (1.5,1) node[below, xshift=0.2cm]{$bca$} -- (0.5,3) node[above]{$cba$}--(-0.5,3)node[above]{$cab$}--(-1.5,1) node [below, xshift=-0.2cm]{$acb$}-- cycle; \filldraw [black] (-1,0) circle (1pt) (1,0) circle (1pt) (-0.5,3) circle (1pt) (1.5,1) circle (1pt) (0.5,3) circle (1pt) (-1.5,1) circle (1pt) ; \end{tikzpicture} & \hspace{-0.8 cm} \begin{tikzpicture} [scale=0.9] \draw (-1,0) node[below]{\small {$dabc$}} -- (1,0)node[below]{\small {$dacb$}} -- (1.2,0.7) node[above]{\small {$adcb$}} -- (-1.2,0.7) node[above]{\small {$adbc$}}--cycle; \filldraw [black] (-1,0) circle (1pt) (1,0) circle (1pt) (1.2,0.7) circle (1pt) (-1.2,0.7) circle (1pt); \draw (1,0) -- (1.3,0.35) node[below, xshift=0.3cm, yshift=0.1cm]{\small {$dcab$}} -- (2.5,2) node[below, xshift=0.2 cm]{\small {$cdab$}}-- (2.7,2.45)node[above, xshift=0.4cm,yshift=-0.1cm]{\small {$cadb$}}-- (1.7,1.4) node[below, xshift=0.3 cm]{\small {$acdb$}} -- (1.2,0.7) ; \filldraw [black] (1.3,0.35) circle (1pt) (2.5,2) circle (1pt) (2.7,2.45) circle (1pt) (1.7,1.4) circle (1pt); \draw (-1,0) -- (-1.3,0.35) node[below, xshift=-0.3cm, yshift=0.1cm]{\small {$dbac$}} -- (-2.5,2) node[below, xshift=-0.2 cm]{\small {$bdac$}}-- (-2.7,2.45)node[above, xshift=-0.4cm, yshift=-0.1cm]{\small {$badc$}}-- (-1.7,1.4) node[below, xshift=-0.2 cm]{\small {$abdc$}} -- (-1.2,0.7) ; \filldraw [black] (-1.3,0.35) circle (1pt) (-2.5,2) circle (1pt) (-2.7,2.45) circle (1pt) (-1.7,1.4) circle (1pt); \draw (-1.7,1.4) -- (-0.5,3.45)
node[above,xshift=0.1cm]{\small {$abcd$}} -- (0.5,3.45) node[above,xshift=-0.1cm]{\small {$acbd$}}-- (1.7,1.4) ; \filldraw [black] (-0.5,3.45) circle (1pt) (0.5,3.45) circle (1pt); \draw (-0.5,3.45) -- (-1.7,4.15) node[above]{\small {$bacd$}} -- (-2.7,2.45) ; \filldraw [black] (-1.7,4.15) circle (1pt); \draw (0.5,3.45) -- (1.7,4.15) node[above,xshift=0.2cm]{\small {$cabd$}} --(1,4.35) node[above]{\small {$cbad$}} --(-1,4.35) node[above]{\small {$bcad$}}--(-1.7,4.15) ; \filldraw [black] (1.7,4.15) circle (1pt) (1,4.35) circle (1pt) (-1,4.35)circle (1pt); \draw (1.7,4.15) -- (2.7,2.45); \draw[dashed] (-1,4.35) -- (-0.8,3.2)node[above,xshift=-0.4cm,yshift=-0.1cm] {\small{$bcda$}}--(0.8,3.2)node[above,xshift=0.4cm,yshift=-0.1cm] {\small{$cbda$}}--(1,4.35); \filldraw [black] (-0.8,3.2) circle (1pt) (0.8,3.2) circle (1pt); \draw[dashed] (-0.8,3.2) -- (-1.2,2.8)node[above,xshift=-0.35cm,yshift=-0.1cm] {\small{$bdca$}}--(-2.5,2); \filldraw [black] (-1.2,2.8) circle (1pt); \draw[dashed] (-1.2,2.8) -- (-0.3,1.35)node[above,xshift=-0.1cm] {\small{$dbca$}}--(0.3,1.35) node[above,xshift=0.1cm] {\small{$dcba$}}--(1.2,2.8) node[above,xshift=0.35cm,yshift=-0.1cm] {\small{$cdba$}}--(0.8,3.2); \filldraw [black] (-0.3,1.35) circle (1pt) (0.3,1.35) circle (1pt) (1.2,2.8) circle (1pt); \draw[dashed] (1.2,2.8) --(2.5,2) ; \draw[dashed] (0.3,1.35) --(1.3,0.35) ; \draw[dashed] (-0.3,1.35) --(-1.3,0.35) ; \end{tikzpicture} \end{tabular} \end{center} \caption{Permutohedron $\mathbf{P}_2$, $\mathbf{P}_3$ and $\mathbf{P}_4$} \label{s:pn} \end{figure} The associahedron $\mathbf{K}_n$ is an $(n-2)$-polytope whose vertices correspond to complete bracketings of a word of $n$ different letters. Hence, the total number of its vertices is the $(n-1)$th \textit{Catalan} number, i.e.\ the cardinality of the set $\mathcal{V}(\mathbf{K}_n)$ is \[\frac{1}{n}\binom{2n-2}{n-1}.\] Two vertices are adjacent if and only if they correspond to a single application of the associativity rule. The $k$-faces of the associahedron are in bijection with the correct bracketings of an $n$-letter word with $n-k-1$ pairs of brackets. Two vertices lie in the same $k$-face if and only if the corresponding complete bracketings can be reduced, by removing $k$ pairs of brackets, to the same bracketing of the word of $n$ letters with $n-k-1$ pairs of brackets. Figure~\ref{s:kn} depicts $\mathbf{K}_n$ for $n \in \{3,4,5\}$. \begin{figure}[h!h!h!]
\begin{center} \begin{tabular}{ccc} \begin{tikzpicture}[scale=0.6] \draw (-1,0) node[below] {$(ab)c$}-- (1,0)node[below] {$a(bc)$}; \filldraw [black] (-1,0) circle (1.4pt) (1,0) circle (1.4pt); \end{tikzpicture} &\hspace{-0.85cm} \begin{tikzpicture} [scale=0.9] \draw (-1,0) node[below]{$a((bc)d)$} -- (1,0)node[below]{$a(b(cd))$} -- (1.3,1) node[above, xshift=0.6cm]{$(ab)(cd)$}--(0,3.2)node[above]{$((ab)c)d$}--(-1.3,1) node [above, xshift=-0.6cm]{$(a(bc))d$}-- cycle; \filldraw [black] (-1,0) circle (1pt) (1,0) circle (1pt) (0,3.2) circle (1pt) (1.3,1) circle (1pt) (-1.3,1) circle (1pt) ; \end{tikzpicture} & \hspace{-0.75cm} \begin{tikzpicture} [scale=1.5] \draw (-0.75,0) node[below]{\small {$((a(bc))d)e$}} -- (1.3,0)node[below]{\small {$(a((bc)d))e$}} -- (0.5,2.2) node[above, yshift=-0.1cm]{\small {$(a(b(cd)))e$}} -- (-0.5,2.2) node[above,xshift=-0.85cm,yshift=-0.25cm]{\small {$((ab)(cd))e$}}--(-1.05,0.5) node[below,xshift=-0.2cm]{\small {$(((ab)c)d)e$}}-- cycle; \filldraw [black] (-0.75,0) circle (0.75pt) (1.3,0) circle (0.75pt) (0.5,2.2) circle (0.75pt) (-0.5,2.2) circle (0.75pt) (-1.05,0.5) circle (0.75pt); \draw (1.3,0) -- (1.7,0.3)node[below,xshift=-0.95cm, yshift=0.3cm]{\small {$a(((bc)d)e)$}} -- (0.9,2.5) node[above, xshift=0.9 cm, yshift=-0.2 cm]{\small {$a((b(cd))e)$}} -- (0.5,2.2); \filldraw [black] (1.7,0.3) circle (0.75pt) (0.9,2.5) circle (0.75pt) (0.5,2.2) circle (0.75pt); \draw (0.9,2.5) -- (0.7,2.7)node[above, xshift=0.5cm]{\small {$a(b((cd)e))$}} -- (0.2,2.7) node[above,xshift=-0.5cm]{\small {$(ab)((cd)e)$}} -- (-0.5,2.2) ; \filldraw [black] (0.7,2.7) circle (0.75pt) (0.2,2.7) circle (0.75pt); \draw[dashed] (-0.75,0) -- (0.4,1)node[above,xshift=-0.4cm,yshift=-0.1cm] {\small{$(a(bc))(de)$}}--(1.1,1) node[above,xshift=0.6cm,yshift=-0.1cm] {\small{$a((bc)(de))$}}--(1.7,0.3); \filldraw [black](0.4,1) circle (0.75pt) (0.1,1.5) circle (0.75pt) (1.1,1) circle (0.75pt); \draw[dashed] (0.4,1) -- (0.1,1.5)node[below,xshift=-0.85cm,yshift=0.2cm] {\small{$((ab)c)(de)$}}--(-1.05,0.5); \filldraw [black](0.4,1) circle (0.75pt); \draw[dashed] (0.1,1.5)--(0.3,2.1) node[below,xshift=-0.5cm] {\small{$(ab)(c(de))$}}-- (0.8,2.1)node[below,xshift=0.5cm] {\small{$a(b(c(de)))$}}--(1.1,1); \filldraw [black](0.3,2.1) circle (0.75pt) (0.8,2.1)circle (0.75pt); \draw[dashed] (0.7,2.7) --(0.8,2.1) ; \draw[dashed] (0.2,2.7) --(0.3,2.1) ; \end{tikzpicture} \end{tabular} \end{center} \caption{Associahedron $\mathbf{K}_3$, $\mathbf{K}_4$ and $\mathbf{K}_5$} \label{s:kn} \end{figure} In the early 1990s, Kapranov's original motivation for the study of $\mathbf{P}_n$ and $\mathbf{K}_n$ was provided by Mac Lane's coherence theorem for associativities and commutativities in monoidal categories \cite{ML63}. He found a ``hybrid'' polytope that demonstrates the interaction between commutativity and associativity, named the permutoassociahedron and denoted by $\mathbf{KP}_n$. It is a polytope whose vertices correspond to all possible complete bracketings of permuted products of $n$ letters. Any $n$ objects in any symmetric (or braided) monoidal category give rise to a diagram of the shape $\mathbf{KP}_n$. He provided its realisation as a combinatorial \emph{CW-complex} and showed that it is an $(n - 1)$-ball. Furthermore, he realised $\mathbf{KP}_3$ and $\mathbf{KP}_4$ as convex polytopes (\cite{K93}). After Kapranov, Reiner and Ziegler gave such a realisation of $\mathbf{KP}_n$ for every $n\geqslant 2$~(\cite{RZ94}). \begin{figure}[h!h!h!]
\begin{center} \begin{tabular}{cc} \begin{tikzpicture} [scale=1.2] \draw (-0.8,0) node[below]{$(ab)c$} -- (0.8,0)node[below]{$(ba)c$} -- (1.15,0.3) node[below, xshift=0.5cm, yshift=0.2cm]{$b(ac)$} -- (1.3,0.5)node[below, xshift=0.5cm,yshift=0.3cm]{$b(ca)$}--(1.25,0.9)node[below, xshift=0.55cm,yshift=0.2cm]{$(bc)a$}--(0.5,2.7)node[below, xshift=0.55cm,yshift=0.2cm]{$(cb)a$}--(0.2,3)node[above,xshift=0.2cm]{$c(ba)$}--(-0.2,3)node[above,xshift=-0.2cm]{$c(ab)$}--(-0.5,2.7)node[below, xshift=-0.55cm,yshift=0.2cm]{$(ca)b$}--(-1.25,0.9)node[below, xshift=-0.6cm,yshift=0.2cm]{$(ac)b$}--(-1.3,0.5)node[below,xshift=-0.5cm,yshift=0.3cm]{$a(cb)$}-- (-1.15,0.3)node[below,xshift=-0.5cm, yshift=0.2cm]{$a(bc)$}--cycle; \filldraw [black] (-0.8,0) circle (0.7pt) (0.8,0) circle (0.7pt) (1.15,0.3) circle (0.7pt) (-1.15,0.3) circle (0.7pt) (1.3,0.5) circle (0.7pt) (-1.3,0.5) circle (0.7pt) (1.25,0.9) circle (0.7pt) (-1.25,0.9) circle (0.7pt) (0.5,2.7) circle (0.7pt) (-0.5,2.7) circle (0.7pt) (0.2,3) circle (0.7pt) (-0.2,3) circle (0.7pt); \end{tikzpicture} & \begin{tikzpicture} [scale=1.2] \draw (-0.8,0) node[below]{$(ab)c$} -- (0.8,0)node[below]{$(ac)b$} -- (1.15,0.3) node[below, xshift=0.5cm, yshift=0.2cm]{$a(cb)$} -- (1.3,0.5)node[below, xshift=0.5cm,yshift=0.3cm]{$c(ab)$}--(1.25,0.9)node[below, xshift=0.55cm,yshift=0.2cm]{$(ca)b$}--(0.5,2.7)node[below, xshift=0.55cm,yshift=0.2cm]{$(cb)a$}--(0.2,3)node[above,xshift=0.2cm]{$c(ba)$}--(-0.2,3)node[above,xshift=-0.2cm]{$b(ca)$}--(-0.5,2.7)node[below, xshift=-0.55cm,yshift=0.2cm]{$(bc)a$}--(-1.25,0.9)node[below, xshift=-0.6cm,yshift=0.2cm]{$(ba)c$}--(-1.3,0.5)node[below,xshift=-0.5cm,yshift=0.3cm]{$b(ac)$}-- (-1.15,0.3)node[below,xshift=-0.5cm, yshift=0.2cm]{$a(bc)$}--cycle; \filldraw [black] (-0.8,0) circle (0.7pt) (0.8,0) circle (0.7pt) (1.15,0.3) circle (0.7pt) (-1.15,0.3) circle (0.7pt) (1.3,0.5) circle (0.7pt) (-1.3,0.5) circle (0.7pt) (1.25,0.9) circle (0.7pt) (-1.25,0.9) circle (0.7pt) (0.5,2.7) circle (0.7pt) (-0.5,2.7) circle (0.7pt) (0.2,3) circle (0.7pt) (-0.2,3) circle (0.7pt); \end{tikzpicture} \end{tabular} \end{center} \caption{2-permutoassociahedron $\mathbf{KP}_3$ and $PA_2$} \label{s:kp3pa2} \end{figure} However, for every $n\geqslant 4$, Kapranov's polytopes are not simple. Even in the case of the 3-polytope $\mathbf{KP}_4$, we may notice some vertices that belong to more than three facets. Since these polytopes are hybrids of polytopes that are both simple, it was natural to search for a family of simple permutoassociahedra. This was first done by Petri\' c in \cite{P14}. In that paper, he described the \emph{simplicial complex} $C$, obtained by a specific iterative nested construction, whose opposite face semilattice is isomorphic to the face lattice (with $\emptyset$ removed) of the simple $n$-polytope $PA_n$. This polytope is obtained by truncations of the $n$-permutohedron such that every vertex expands into an $(n-1)$-associahedron. Note that the vertices of $PA_n$ can be combinatorially given in the same way as the vertices of $\mathbf{KP}_{n+1}$, but $PA_{n}$ is simple in every dimension. The main difference in approach, which leads to the simplicity of the hybrid polytope, is the choice of arrows that generate symmetry in a symmetric monoidal category. Namely, there are two types of edges of $\mathbf{KP}_{n}$, corresponding either to a single reparenthesisation, or to a transposition of two adjacent letters that are grouped together.
On the other hand, the edges of $PA_n$ are of the following two types: they also correspond to a single reparenthesisation, or to a transposition of two adjacent letters that are \textit{not} grouped together, i.e.\ to ``the most unexpected'' transposition of neighbours. This essential difference can be recognised even between $\mathbf{KP}_{3}$ and $PA_{2}$, which are both \emph{dodecagons} (see Figure~\ref{s:kp3pa2}). The 3-dimensional members of these two families of permutoassociahedra are illustrated in Figure~\ref{s:kp4pa3}\footnote{The left illustration is taken from \cite[Section~9.3]{RZ94}, while the right one is made using the graphical algorithm editor \textit{Grasshopper} (\cite{R18}), a plug-in for the Rhinoceros 3D modelling package (\cite{N18}).}. There is a nonsimple vertex of $\mathbf{KP}_{4}$ that corresponds to the word $(bc)(ad)$ and is connected by edges with the vertices that correspond to the words $(bc)(da)$, $((bc)a)d$, $b(c(ad))$ and $(cb)(ad)$. The vertex of $PA_{3}$ that corresponds to the same word is adjacent to just the three vertices that correspond to the words $((bc)a)d$, $b(c(ad))$ and $(ba)(cd)$. \begin{figure}[h!h!h!] \begin{center} \includegraphics[width=0.77\textwidth]{3Dpermutoasociedri.JPG} \caption{3-permutoassociahedron $\mathbf{KP}_4$ and $PA_3$ } \label{s:kp4pa3} \end{center} \end{figure} Based on \cite{P14}, the family of simple permutoassociahedra was further investigated by Curien, Ivanovi\' c and Obradovi\' c (\cite[Section~5.2]{CIO17}), and also by Barali\' c, Ivanovi\' c and Petri\' c, who gave another explicit realisation, with systems of inequalities representing halfspaces in $\mathbf{R}^{n+1}$. This realisation is denoted by $\mathbf{PA}_{n}$ in \cite{BIP17}. In Section~2, we briefly present the simplicial complex $C$ (the face lattice of a simple permutoassociahedron given combinatorially) and its geometrical realisation $\mathbf{PA}_{n}$. Since the geometrical realisation of this family serves as a topological proof of Mac Lane's coherence, and since it is a generalisation of nestohedra, defined by Postnikov as Minkowski sums of standard simplices (\cite{PRW08}), it is natural to search for an alternative realisation of the simplicial complex $C$ which also uses the Minkowski sum as a constructive tool. The Minkowski decomposability of every simple polytope was confirmed by Gr\"unbaum more than fifty years ago in \cite[Chapter~15.1, p.\ 321]{G67}, and therefore, the decomposability of $\mathbf{PA}_n$ is guaranteed. However, we are interested in finding a very specific decomposition (see Definition~\ref{d_Mink_realizacija} below) of its normal equivalent. Besides Postnikov's representation of nestohedra, there is the well-known family of \emph{zonohedra} (also called \emph{zonotopes}, \cite[Section~7.3]{Z95}), defined as Minkowski sums of line segments. There is no other specified representation of a significant family of polytopes which uses Minkowski sums. Therefore, the main goal of the paper is to define an $n$-dimensional Minkowski realisation of the simplicial complex $C$ for every $n\geqslant 2$, according to Definition~\ref{d_Mink_realizacija}, i.e.\ to find an $n$-polytope in $\mathbf{R}^{n+1}$, denoted by $PA_{n,1}$, which is combinatorially equivalent to $PA_n$ and obtained by Minkowski sums of particular polytopes. The summands are such that each of them leads to the appropriate facet of the whole sum, i.e.\ to a truncation of the currently obtained partial sum.
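Since Minkowski sums are the central constructive tool here, we recall the elementary fact that the Minkowski sum of two convex polytopes given by their vertices is the convex hull of all pairwise vertex sums. The following small computational sketch (ours, purely illustrative and not the construction of $PA_{n,1}$) computes such a sum:
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(P, Q):
    # Vertices of conv{p + q : p a vertex of P, q a vertex of Q}.
    S = np.array([p + q for p in P for q in Q])
    return S[ConvexHull(S).vertices]

square = [np.array(v) for v in [(0, 0), (1, 0), (1, 1), (0, 1)]]
segment = [np.array(v) for v in [(0, 0), (1, 1)]]
print(minkowski_sum(square, segment))   # a hexagon, zonotope-style
\end{verbatim}
In the example, each summand contributes its own edge directions to the sum, which is the vertex-level analogue of the property required of the summands of $PA_{n,1}$: each summand produces one new facet.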
Before giving the general result, we investigate the 2-dimensional Minkowski-realisation of $C$. Namely, in Section 4, we define a 2-polytope $^M PA_2$, normally equivalent to the polytope $\mathbf{PA}_2$. Then, in Section 5, for every $n\geqslant 2$, we specify a family of $n$-polytopes $PA_{n,c}$ for $c\in(0,1]$ such that each member of the family is an $n$-dimensional Minkowski-realisation of $C$ and each one is normally equivalent to $\mathbf{PA}_n$. In particular, the most significant member of the family is $PA_{n,1}$, obtained for $c=1$, because all its summands are defined as convex hulls of points in $\mathbf{R}^{n+1}$. This is particularly beneficial from the computational point of view. An additional advantage of the new approach using Minkowski sums lies in constructing an algorithm for the realisation of other families of polytopes that also generalise nestohedra. Namely, this research leads to a clear correspondence between Minkowski sums and truncations of the permutohedron. This implicitly delivers a general procedure for the geometrical Minkowski construction of any hybrid of the permutohedron and an arbitrary nestohedron (a permutohedron-based nestohedron) such that every summand produces exactly one truncation, i.e.\ yields the appropriate facet of the resulting Minkowski sum. Throughout the text, the cardinality of a set $X$ is denoted by $\lvert X \rvert$, $conv\{v_1,\ldots,v_k\}$ represents the convex hull of the points $v_1,\ldots,v_k$, the dual space of a vector space $W$ is denoted by $W^\ast$, the set $\{1,\ldots,k\}$ is denoted by $[k]$, the subset relation is denoted by $\subseteq$, while the \textit{proper} subset relation is denoted by~$\subset$. Also, by \emph{comparability}, we mean comparability with respect to inclusion. \section{nested sets} \label{s_nested} In this section, we present some known facts about a family of simplicial complexes and two-fold nested sets that are closely related to the face lattice of $PA_n$. Since the main goal of the paper is a new geometrical realisation, we omit the theory of nested set complexes in its full generality. We present only the part of the already established theory that is necessary for our research. The following expositions about complexes of nested sets for simplicial complexes and the definition of $\mathbf{PA}_n$ are inherited from \cite{FK04} and \cite{BIP17}, respectively. \begin{dfn} \label{d_geometrijska_realizacija}\emph{(cf.\ {\cite[p.\ 7]{BIP17}})} A polytope $P$ \emph{(geometrically) realises} a simplicial complex $K$, when the semilattice obtained by removing the bottom (the empty set) from the face lattice of $P$ is isomorphic to $(K,\supseteq)$. \end{dfn} \begin{dfn} \label{d_komb_ekvivalentni politopi}\emph{(cf.\ {\cite[p.\ 38]{G67}})} Two polytopes $P$ and $Q$ are \emph{combinatorially equivalent}, when their face lattices are isomorphic; this is denoted by $P\sim Q$. \end{dfn} \begin{dfn} \label{d_bilding_skup}\emph{(cf.\ {\cite[Definition~3.1]{BIP17}})} A collection $\mathcal{B}$ of non-empty subsets of a finite set $V$ containing all singletons $\{v\}, v\in V$ and satisfying that for any two sets $S_1, S_2\in \mathcal{B}$ such that $S_1\cap S_2 \neq\emptyset$, their union $S_1\cup S_2$ also belongs to $\mathcal{B}$, is called a \emph{building set} of $\mathcal{P}(V)$. Let $K$ be a simplicial complex and let $V_1,\ldots,V_m$ be the maximal simplices of $K$.
A collection $\mathcal{B}$ of some simplices of $K$ is called a \emph{building set} of $K$, when for every $i\in [m]$, the collection $$\mathcal{B}_{V_i}=\mathcal{B} \cap \mathcal{P}(V_i)$$ is a building set of $\mathcal{P}(V_i)$. \end{dfn} For a family of sets $N$, $\{X_1,\ldots,X_m\}\subseteq N$ is an $N$\textit{-antichain}, when $m \geqslant 2$ and $X_1,\ldots,X_m$ are mutually incomparable. \begin{dfn}\label{d_nested_skup_u_odnosu_na_bilding_skup}\emph{(cf.\ {\cite[Definition~3.2]{BIP17}})} Let $\mathcal{B}$ be a building set of a simplicial complex $K$. We say that $N\subseteq \mathcal{B}$ is a \emph{nested set} with respect to $\mathcal{B}$, when the union of every $N$-antichain is an element of $K-\mathcal{B}$. \end{dfn} A subset of a nested set is again a nested set. Hence, the nested sets form a simplicial complex. Now, we proceed to the construction of the building set that gives rise to a simplicial complex of nested sets, which is associated with the simple permutoassociahedron. For $n\geqslant 1$, let $C_0$ be the simplicial complex $\mathcal{P}([n+1])-\{[n+1]\}$, the family of subsets of $[n+1]$ with at most $n$ elements. The simplicial complex $C_0$ is known as the \emph{boundary complex} $\partial \Delta^n$ of the abstract $n$-simplex $\Delta^n$. \begin{rem}\label{r_postnikov_i_nas_nested} The simplicial complex of all nested sets with respect to the building set $\mathcal{B}$ of $C_0$ is isomorphic to the simplicial complex formed by all Postnikov's nested sets with the maximal element of the building set removed. For more details, we refer to \cite[Section~3]{P14}. \end{rem} As a direct corollary of Proposition~9.10 in \cite{DP10}, we have the following claim. \begin{prop}\label{p_relaizacijaCo} For every building set $\mathcal{B}$ of $C_0$, there exists a nestohedron $P$ that realises the simplicial complex $K$ of all nested sets with respect to $\mathcal{B}$. \end{prop} Such a nestohedron is introduced at the end of Section 3, where we present a polytope $P_\mathcal{B}$ whose semilattice obtained by removing the bottom from its face lattice is isomorphic to $(K,\supseteq)$. This contravariant isomorphism is obtained in such a way that the maximal nested sets correspond to the vertices of the polytope, while the minimal nested sets, i.e.\ the elements of $\mathcal{B}$, correspond to its facets. In general, we say the following. \begin{dfn}\label{d_proplabel} Let $P$ be a polytope that realises a simplicial complex $K$ of all nested sets with respect to the building set $\mathcal{B}$ and let $f$ be the contravariant isomorphism. A facet $F$ of $P$ is \emph{properly labelled} by the element $B$ of $\mathcal{B}$ when $f(F)=\{B\}$. Consequently, two facets of $P$ have a common vertex if and only if there is a nested set containing both their labels. \end{dfn} Now, let $\mathcal{B}_0=C_0-\{\emptyset\}$. According to Definition~\ref{d_bilding_skup}, $\mathcal{B}_0$ is a building set of $C_0$. A set $N\subseteq \mathcal{B}_0$ such that the union of every $N$-antichain belongs to $C_0-\mathcal{B}_0$ is called 0-\emph{nested}. According to Definition~\ref{d_nested_skup_u_odnosu_na_bilding_skup}, every 0-nested set is a nested set with respect to $\mathcal{B}_0$. Since a subset of a 0-nested set is also a 0-nested set, the family of all 0-nested sets forms a new simplicial complex $C_1$.
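Since $C_0-\mathcal{B}_0=\{\emptyset\}$ and the union of any antichain of non-empty sets is non-empty, a subset of $\mathcal{B}_0$ is 0-nested precisely when it is a chain with respect to inclusion. For a reader who wishes to experiment, the following minimal Python sketch (an illustration only, not part of the formal development) enumerates the maximal 0-nested sets for a small $n$ and confirms the count $(n+1)!$ mentioned below.
\begin{verbatim}
from itertools import combinations

n = 2
ground = range(1, n + 2)                      # the set [n+1]
B0 = [frozenset(c) for k in range(1, n + 1)   # all non-empty proper
      for c in combinations(ground, k)]       # subsets of [n+1]

def is_chain(sets):
    return all(a <= b or b <= a for a, b in combinations(sets, 2))

# an n-element chain in B0 automatically has one member of each
# cardinality 1..n, i.e. it is a maximal 0-nested set
flags = [N for N in combinations(B0, n) if is_chain(N)]
print(len(flags))                             # (n+1)! = 6 for n = 2
\end{verbatim}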
Maximal 0-nested sets are of the form \[ \bigl\{ \{i_n,\ldots,i_1\},\ldots,\{i_n,i_{n-1}\},\{i_n\} \bigr\}, \] where $i_1,\ldots,i_n$ are mutually distinct elements of $[n+1]$. On the other hand, if we consider a graph $\Gamma$ with $[n+1]$ as the set of vertices, the set of all members of $C_0$ that are non-empty and connected in $\Gamma$ makes a (graphical) building set of $C_0$. Each of these building sets gives rise to a simplicial complex of nested sets, which can be realised as an $n$-nestohedron---a \emph{graph-associahedron} (\cite{CD06}). For example, the $n$-permutohedron and the $n$-associahedron correspond to the \emph{complete graph} on $[n+1]$ and the \textit{path graph} $1-\ldots-(n+1)$, respectively. By the definition of $\mathcal{B}_0$, the simplicial complex of nested sets corresponding to the complete graph on $[n+1]$ is exactly $C_1$, i.e.\ the $n$-permutohedron realises $C_1$. The maximal 0-nested sets correspond to the vertices of the permutohedron such that the above-mentioned maximal 0-nested set is associated with the permutation \[ i_{n+1} i_1\ldots i_n \] of $[n+1]$, where $\{i_{n+1}\}=[n+1]-\{i_1,\ldots,i_n\}$. It is easy to see that there are $(n+1)!$ maximal 0-nested sets. The minimal nested sets, i.e.\ those of the form $\{B\}$ for $B \in \mathcal{B}_0$, correspond to the facets of the permutohedron. Therefore, two properly labelled facets have a common vertex if and only if their labels are comparable, according to Definition~\ref{d_proplabel}. Observe that $\mathcal{B}_0$ was defined in a way that covers the recipe for the completely truncated simplex, i.e.\ the permutohedron. One can conclude that the next logical step on the road to the permutoassociahedron is to truncate further in order to stretch the interval. Starting from the permutohedron with the recipe that corresponds to the associahedron, we need a new building set of $C_1$ according to a path graph. Namely, for a maximal 0-nested set \[ \bigl\{ \{i_n,\ldots,i_1\},\ldots,\{i_n,i_{n-1}\},\{i_n\} \bigr\}, \] we observe the path graph with $n$ vertices and $n-1$ edges \[ \{i_n,\ldots,i_1\}-\ldots-\{i_n,i_{n-1}\}-\{i_n\}. \] A set of vertices of this graph is connected when it is the vertex set of a connected subgraph. Now, let $\mathcal{B}_1\subseteq C_1$ be the family of all sets of the form \[ \bigl\{ \{i_{k+l},\ldots,i_k,\ldots,i_1\},\ldots,\{i_{k+l},\ldots,i_k,i_{k-1}\},\{i_{k+l},\ldots,i_k\} \bigr\}, \] where $1\leqslant k\leqslant k+l\leqslant n$ and $i_1,\ldots,i_{k+l}$ are mutually distinct elements of $[n+1]$, i.e.\ let $\mathcal{B}_1$ be the set of all non-empty connected sets of vertices of the path graphs that correspond to all maximal 0-nested sets. By Definition~\ref{d_bilding_skup}, $\mathcal{B}_1$ is indeed a building set of the simplicial complex $C_1$. A set $N\subseteq \mathcal{B}_1$ is 1-\emph{nested} when the union of every $N$-antichain belongs to $C_1-\mathcal{B}_1$. By Definition~\ref{d_nested_skup_u_odnosu_na_bilding_skup}, every 1-nested set is a nested set with respect to $\mathcal{B}_1$. Again, one can verify that the family of all 1-nested sets forms a simplicial complex, which is denoted by $C$. For a polytope $P$ that realises $C$, the maximal 1-nested sets correspond to the vertices of $P$, while the singleton 1-nested sets correspond to its facets.
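Viewed differently, $\mathcal{B}_1$ is exactly the family of saturated chains in $\mathcal{B}_0$, i.e.\ of chains in which the cardinalities of consecutive members differ by one. Continuing the purely illustrative Python sketch above, one can generate $\mathcal{B}_1$ and check that for $n=2$ it consists of the $12$ elements listed in Section 4.
\begin{verbatim}
def saturated(chain):
    chain = sorted(chain, key=len)
    return all(a < b and len(b) == len(a) + 1
               for a, b in zip(chain, chain[1:]))

# B1 = saturated chains in B0 of length 1..n (B0 and n as above)
B1 = [frozenset(N) for m in range(1, n + 1)
      for N in combinations(B0, m) if saturated(N)]
print(len(B1))                                # 12 for n = 2
\end{verbatim}
Hence, from the definition of $\mathcal{B}_1$ and Definition~\ref{d_proplabel}, the next claim follows directly.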
\begin{prop}\label{p_labele_odrelaizatoraC} Let $P$ be a polytope that realises $C$, whose facets are properly labelled by the elements of $\mathcal{B}_1$. Two facets of $P$ have a common vertex if and only if their labels are comparable or the union of their labels is in $C_1-\mathcal{B}_1$. \end{prop} According to \cite{BIP17}, a geometrical realisation of $C$ is given as follows. For $1\leqslant k\leqslant k+l\leqslant n$, let \[ \kappa(k,l)=\frac{3^{k+l+1}-3^{l+1}}{2}+\frac{3^k-3k}{3^n-n-1}. \] For an element $ \beta=\bigl\{\{i_{k+l},\ldots,i_k,\ldots,i_1\},\ldots,\{i_{k+l},\ldots,i_k,i_{k-1}\},\{i_{k+l},\ldots,i_k\} \bigr\}$ of $\mathcal{B}_1$, let $\pi_\beta$ be the hyperplane in $\mathbf{R}^{n+1}$ given by the equation \[ x_{i_1}+2x_{i_2}+\ldots+k(x_{i_k}+\ldots+x_{i_{k+l}})=\kappa(k,l). \] \begin{flushleft} For $\pi$ being the hyperplane $x_1+\ldots +x_{n+1}=3^{n+1}$ in $\mathbf{R}^{n+1}$, let \[ \mathbf{PA}_n=(\bigcap \{{\pi_\beta}^{\geqslant}\mid \beta\in \mathcal{B}_1\})\cap \pi. \] \end{flushleft} \begin{thm}\label{t_realizacijaZoran}\emph{(cf.\ {\cite[Theorem~5.2]{BIP17}})} $\mathbf{PA}_n \subseteq \mathbf{R}^{n+1}$ is a simple $n$-polytope that realises $C$. \end{thm} As a consequence of the previous theorem, Proposition~\ref{p_labele_odrelaizatoraC} and Lemma~5.5 in \cite{BIP17}, we have the following. \begin{cor} \label{c_labele_odPA_n} For every $\beta \in \mathcal{B}_1$, the halfspace ${\pi_\beta}^{\geqslant}$ is facet-defining for $\mathbf{PA}_n$. Moreover, if the facets of $\mathbf{PA}_n$ are properly labelled, then the facet $ \mathbf{PA}_n \cap \pi_\beta $ is labelled by $\beta$. \end{cor} \section{minkowski sum, normal cones and fans}\label{s_minkowski} Before we define our main task related to the last theorem, let us recall some facts about normal cones and fans, and also about the Minkowski sum, which is one of the fundamental operations on point sets. The collection of all polytopes in $\mathbf{R}^n$ is denoted by $\mathcal{M}_n$ (following \cite{B08}). \begin{dfn}\label{d_suportfunction} \emph{(cf.\ {\cite[p.\ 36]{B08}})} The \emph{supporting function} of $P\in \mathcal{M}_n$ is the function \[s_P:\mathbf{R}^n\longrightarrow \mathbf{R}:s_P(x)=\max\limits_{y\in P} \langle x,y\rangle. \] \end{dfn} For every face $F$ of a polytope $P$, there is a supporting hyperplane through $F$. The set of outward normals to all such hyperplanes spans a polyhedral cone, the normal cone at $F$ (see Figure~\ref{s:skica_fanovi}). A more formal definition follows. \begin{dfn}\label{d_normalconefan} \emph{(cf.\ {\cite[p.\ 193]{{Z95}}})} For a given face $F$ of a $d$-polytope $P\in \mathcal{M}_n $, the \emph{normal cone} to $P$ at $F$ is the collection of linear functionals $v$ in $(\mathbf{R}^n)^\ast$ whose maximum on $P$ is achieved on all the points in the face $F$, i.e. \[ N_F(P)=\{v\in (\mathbf{R}^n)^\ast \mid \langle v,y\rangle = s_P(v), \; \forall y\in F\}. \] One-dimensional normal cones are called \emph{rays}. The \emph{normal fan} of $P$ is the collection \[ \mathcal{N}(P)=\{N_F(P) \mid F \emph{\text{ is a non-empty face of }} P\}. \] \end{dfn} The normal fan $\mathcal{N}(P)$ is \emph{complete} for every $P \in \mathcal{M}_n$, which means that the union of all normal cones in $\mathcal{N}(P)$ is $\mathbf{R}^n$. As we only consider normal fans, the word ``normal'' will be assumed and omitted for brevity from now on. Also, an arbitrary convex cone in $\mathbf{R}^{n}$ of dimension $d \leqslant n$ is referred to as a $d$-cone.
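To make these notions concrete, the following toy Python sketch (illustrative only; the chosen hexagon is arbitrary) evaluates the supporting function of a polygon and reads off the face at which a given functional attains its maximum, i.e.\ the face whose normal cone contains that functional.
\begin{verbatim}
import numpy as np

# vertices of a hexagon in the plane
P = np.array([(2, 0), (1, 2), (-1, 2), (-2, 0), (-1, -2), (1, -2)])

def support(v):
    return max(P @ v)                 # s_P(v) = max <v, y> over P

def face(v):
    s = support(v)
    return [tuple(y) for y in P if np.isclose(y @ v, s)]

print(face(np.array([2, 1])))   # v normal to an edge: two vertices
print(face(np.array([1, 0])))   # generic v: a single vertex
\end{verbatim}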
\begin{rem}\label{r_preskmaksimalnihnijemaksimalan} The intersection of any two normal cones in $\mathcal{N}(P)$ at two faces $F_1$ and $F_2$ is a common face of both cones, which is also the normal cone at the smallest face of $P$ that contains both $F_1$ and $F_2$. \end{rem} \begin{exm}\label{e_fanduzi} The fan of a single line segment $L$ is the set $\{H,H^{\geqslant},H^{\leqslant}\}$, where $H$ is a hyperplane normal to $L$. \end{exm} \begin{dfn} \label{d_norm_ekvivalentni politopi} \emph{(cf.\ {\cite[p.\ 193]{{Z95}}})} Two polytopes $P,Q \in \mathcal{M}_n$ are called \emph{normally equivalent} when they have the same fan: \[ P\simeq Q \Leftrightarrow \mathcal{N}(P) = \mathcal{N}(Q). \] \end{dfn} \begin{flushleft} In the literature, normally equivalent polytopes are also called ``analogous'', ``strongly isomorphic'' or ``related''. The term ``normally equivalent'' is used in \cite{G67} and \cite{Z95}. An example of two normally equivalent polytopes is given in Figure~\ref{s:javaview}. \end{flushleft} One can verify that $P\simeq Q \Rightarrow P\sim Q$, but the converse does not hold. If $Q$ can be obtained from $P$ by parallel translations of the facets, then the outward normals to the corresponding facets of $P$ and $Q$ have the same directions, and hence the rays in $\mathcal{N}(Q)$ and $\mathcal{N}(P)$ coincide. Therefore, the next proposition holds. \begin{prop} \label{p_paralelni_feseti} Two combinatorially equivalent polytopes are normally equivalent if and only if their corresponding facets are parallel. \end{prop} \begin{rem}\label{r_correspondingfacettruncation} If $P$ is a polytope in $\mathcal{M}_n$ defined as the intersection of the following $m$ facet-defining halfspaces \[ \langle a_i,x\rangle\geqslant b_i, \; \; 1\leqslant i\leqslant m, \] then a truncation of $P$ in its proper face $F$, \emph{tr}$_FP=P\cap \pi^{\geqslant}$, is the intersection of the following $m+1$ facet-defining halfspaces \[ \langle a_i,x\rangle\geqslant b_i, \; \; 0\leqslant i\leqslant m, \] where $\langle a_0,x\rangle\geqslant b_0$ defines the halfspace $\pi^{\geqslant}$. The previous proposition implies the following. For every polytope $Q$ which is normally equivalent to $\emph{tr}_FP$, there exists $c \in \mathbf{R}^{m+1}$ with entries $c_i$ such that $Q$ is the intersection of the following $m+1$ facet-defining halfspaces \[ \langle a_i,x\rangle\geqslant c_i, \; \; 0\leqslant i\leqslant m. \] Hence, if $f$ is a facet of $Q$ lying in the hyperplane $\langle a_0,x\rangle= c_0$ parallel to $\pi$, then there is a bijection $$\mu:\mathcal{F}(Q)-\{f\}\rightarrow \mathcal{F}(P)$$ mapping facets to parallel facets. We say that facets of polytopes $P$ and $Q$ correspond to each other when they correspond according to $\mu$. Also, $f$ is called the newly appeared facet of $Q$. \end{rem} \begin{lem}\label{l_konusiparalelnetrunkacije} Let \emph{tr}$_FP=P \cap \pi^{\geqslant}$ be a truncation of a given polytope $P \in \mathcal{M}_n$ in its face $F$. For a vertex $u\in F$, let $\{w_i\mid i \in [k]\}$ be the set of vertices of $P$ adjacent to $u$ but not contained in $F$. Also, for every $i \in [k]$, let $E_i=\overline{uw_i}$ and $v_i= E_i\cap \pi$. The union of all normal cones in $\mathcal{N}(\emph{tr}_FP)$ at the vertices contained in $\pi$ is equal to the union of all normal cones in $\mathcal{N}(P)$ at the vertices contained in $F$.
Moreover, if the truncation is parallel, then $$ N_{u}(P)=\bigcup\limits_{i \in [k]}N_{v_i}(\emph{tr}_FP).$$ \end{lem} \begin{proof} Without loss of generality, suppose that $P$ is full-dimensional. Let $a_0$ be an outward normal to the truncation hyperplane $\pi$. By the definition of truncation, $N_v(\text{tr}_FP)=N_v(P)$ for every vertex $v$ which is common for both polytopes. Hence, the first part of the claim follows directly from the fact that both fans are complete. Now, let $i$ be an arbitrary element of $[k]$. For every spanning ray $a$ of $N_{v_i}(\text{tr}_FP)$ such that $a \neq a_0$, there is a facet of $P$ which contains $E_i$ and whose outward normal is $a$, and therefore, $a$ is contained in the cone $N_{u}(P)$. Since the truncation is parallel, there is a hyperplane parallel to $\pi$ which defines $F$, i.e.\ the functional $a_0$ attains the maximum value at $F$ over all points in $P$. This implies that $a_0$ is contained in the normal cone $N_F(P)$. According to Remark~\ref{r_preskmaksimalnihnijemaksimalan}, $N_F(P)$ is a common face of all normal cones to $P$ at the vertices of $F$. Therefore, $a_0$ is contained in each of them. In particular, $a_0 \in N_{u}(P)$. We conclude that every spanning ray of $N_{v_i}(\text{tr}_FP)$ is contained in $N_{u}(P)$, which implies $N_{v_i}(\text{tr}_FP)\subseteq N_{u}(P)$. For the reverse inclusion, note that, by the first part of the claim, $N_u(P)$ is covered by the cones $N_v(\text{tr}_FP)$ with $v\in \pi$, while, by Remark~\ref{r_preskmaksimalnihnijemaksimalan}, the interior of $N_u(P)$ is disjoint from every cone $N_v(\text{tr}_FP)$ arising from an edge at a vertex of $F$ other than $u$. \end{proof} \begin{dfn}\label{d_minkowski} \emph{(cf.\ {\cite[Definition~1.1.]{B08}})} Let $A,B\subseteq \mathbf{R}^n$. The \emph{Minkowski sum} of $A$ and $B$ is the set \[ A+B=\{x\in \mathbf{R}^n \mid x=x_1+x_2, \; x_1\in A, \; x_2\in B\}. \] We call $A$ and $B$ the \emph{summands} of $A+B$. \end{dfn} The Minkowski sum of two polytopes is again a polytope, thus we can use this operation as a classical geometrical constructive tool, which allows us to produce new polytopes from known ones. Moreover, this operation establishes an abelian monoid structure on $\mathcal{M}_n$, where the neutral element is the point $0=(0,\ldots,0)\in \mathbf{R}^n$. Note that $\mathcal{M}_n$ is also closed under scaling: for a given $\lambda \in \mathbf{R}$ and $P \in \mathcal{M}_n$, \[ \lambda P =\{\lambda x \in \mathbf{R}^n \mid x\in P\}. \] \begin{rem} \label{r_lambdaP} Scaling a polytope does not change its fan, i.e.\ for every $\lambda > 0$ and $P\in \mathcal{M}_n$, $\lambda P \simeq P$ holds. \end{rem} \begin{rem}\label{r_trivialMInk} For every $0\leqslant \lambda \leqslant 1$ and $P \in \mathcal{M}_n$, $\lambda P$ is trivially a summand of $P$ for $$P=\lambda P+(1-\lambda)P.$$ \end{rem} Throughout the paper, addition of polytopes always refers to the Minkowski sum. \begin{dfn}\label{d_summand_produces_a_facet} A polytope $P_2$ is a \emph{truncator summand} for a polytope $P_1$, when there is a truncation $\emph{tr}_FP_1$ of $P_1$ in its proper face $F$ such that $$P_1+P_2 \simeq \emph{tr}_FP_1.$$ \end{dfn} \begin{dfn}\label{d_truncator_set} An indexed set of polytopes $\{P_i\}_{i\in [m]}$ is a \emph{truncator set of summands} for a polytope $S_0$, when for every $i\in [m]$, $P_i$ is a truncator summand for a polytope $S_{i-1}$, where $S_i=S_{i-1}+P_i$, $i\in[m]$. \end{dfn} Now, let $e_i$, $i \in [n+1]$, be the endpoints of the standard basis vectors in $\mathbf{R}^{n+1}$ and let $$\Delta_I = conv\{e_i \mid i \in I\}$$ be the standard $(\lvert I \rvert -1)$-simplex for any given set $I\subseteq [n+1]$.
\begin{dfn}\label{d_Mink_realizacija} Let $K$ be the simplicial complex of all nested sets with respect to the building set $\mathcal{B}$ and let $\{\mathcal{A}_1,\mathcal{A}_2\}$ be a partition of $\mathcal{B}$ such that the block $\mathcal{A}_1$ is the collection of all singleton elements of $\mathcal{B}$. An $n$-polytope $P$ is an $n$-dimensional \emph{Minkowski-realisation} of $K$ when the following conditions are satisfied: \begin{enumerate} \item[\emph{(i)}] $P$ realises $K$; \item[\emph{(ii)}] there exists a function $\varphi:\mathcal{B}\longrightarrow \mathcal{M}_{n+1}$ such that $$P=\Delta_{[n+1]}+\sum\limits_{\beta \in \mathcal{B}} \varphi(\beta);$$ \item[\emph{(iii)}] for any indexing function $x: [m]\longrightarrow \mathcal{A}_2$ such that $\lvert x(i)\rvert \geqslant \lvert x(j)\rvert$ for every $i<j$, the indexed set $\{P_i\}_{i \in [m]}$, where $P_i=\varphi(x(i))$, is a truncator set of summands for the partial sum $$\Delta_{[n+1]}+\sum_{\beta\in \mathcal{A}_1} \varphi(\beta).$$ \end{enumerate} \end{dfn} The main question of the paper follows. It is related to the simplicial complex $C$ defined in the previous section. \begin{que}\label{q_pitanje} How to define a polytope in $\mathbf{R}^{n+1}$, which is an $n$-dimensional Minkowski-realisation of $C$ and which is normally equivalent to $\mathbf{PA}_n$? \end{que} In Section 4, we answer the question in the case $n=2$, while the general answer for every dimension is given in Section 5. Moreover, we define a family of $n$-polytopes with the requested properties. It is well known that every simple polytope except the simplex is \emph{decomposable} (\cite[Chapter~15.1, p.\ 321]{G67}), i.e.\ it can be represented as a Minkowski sum in a nontrivial manner such that the representation possesses a summand that is not positively homothetic to the whole sum (see Remark~\ref{r_trivialMInk}). Thus, the decomposability of $\mathbf{PA}_n$ is guaranteed, i.e.\ a nontrivial representation of the family of simple permutoassociahedra as a Minkowski sum exists. But our goal is a very specific representation according to Definition~\ref{d_Mink_realizacija}, and we are searching for a polytope that need not be congruent to $\mathbf{PA}_n$. Still, by the additional requirement of Question~\ref{q_pitanje}, the two polytopes have to be normally equivalent. In this manner, by requesting normal equivalence between polytopes, we stay on the bridge between congruence and combinatorial equivalence. Let us recall the following. \begin{prop}\label{prop_minkowski_svojstva1} \emph{(cf.\ {\cite[Lemma~1.4.]{B08}})} If $P_1=conv\{v_1,\ldots,v_k\}$ and \\$P_2=conv\{w_1,\ldots,w_l\}$ are polytopes in $\mathcal{M}_n$, then \[ P_1+P_2=conv\{v_1+w_1,\ldots,v_i+w_j,\ldots,v_k+w_l\}. \] \end{prop} It follows that for every point $A\in \mathbf{R}^n$, $P+\{A\}$ is a translate of the polytope $P$. Throughout the text, for two given points $P$ and $T_i$, the point $P_i$ is defined by $\{P_i\}=\{P\}+\{T_i\}$. \begin{cor}\label{prop_minkowski_svojstva2} The following holds in $\mathcal{M}_n$: \begin{enumerate} \item[\emph{(i)}] if $P_1 = P_2$, up to translation, then $P+P_1=P+P_2$, up to translation; \item[\emph{(ii)}] if $P=P_1+P_2$, then $\dim(P)\geqslant \max\{\dim(P_1),\dim(P_2)\}$. \end{enumerate} \end{cor} Unlike convexity, simplicity is often violated, i.e.\ the sum of simple polytopes often fails to be simple.
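Proposition~\ref{prop_minkowski_svojstva1} also gives a direct way to compute Minkowski sums in practice: sum the vertex sets pairwise and take the convex hull of the result. A minimal Python sketch (assuming SciPy is available and that the resulting point set is full-dimensional, so that the hull computation does not degenerate):
\begin{verbatim}
from itertools import product
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(V, W):
    """Vertices of conv(V) + conv(W) for 2-D point arrays V, W."""
    pts = np.array([v + w for v, w in product(V, W)])
    return pts[ConvexHull(pts).vertices]

square = np.array([(0, 0), (1, 0), (1, 1), (0, 1)])
segment = np.array([(0, 0), (2, 1)])
print(minkowski_sum(square, segment))   # a hexagon: 6 vertices
\end{verbatim}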
Although the Minkowski sum is a very simple geometrical operation, its result is often neither intuitively predictable nor obvious, especially when summing a collection of polytopes of various dimensions or polytopes with many vertices. Our Question~\ref{q_pitanje} is related to the development of the following idea of Postnikov, implemented in his Minkowski-realisation of the family of nestohedra (see \cite{P09}). Let $\mathcal{B}$ be a connected building set of the set $[n+1]$ such that $[n+1]\in \mathcal{B} $. For any set $B \in \mathcal{B}$, we consider the $(\lvert B \rvert -1)$-simplex $\Delta_B$, and the sum \[ P_\mathcal{B} = \sum_{B\in \mathcal{B}} \Delta_B. \] It is shown that this sum is a simple $n$-polytope, which can be obtained by successive parallel truncations of an $n$-simplex; vice versa, for a nestohedron $P$ and the corresponding building set $\mathcal{B}$, we have $P\sim P_\mathcal{B}$ (see \cite[Theorem~7.4]{P09}). Note that the following partial sum $$\Delta_{[n+1]}+\sum_{\substack{B\in \mathcal{B} \\ \lvert B \rvert =1}} \Delta_B$$ is a translate of the $n$-simplex $\Delta_{[n+1]}$ by the point $(1,\ldots,1)\in \mathbf{R}^{n+1}$. For every totally ordered indexing set $I$ of the set of all non-singleton elements of $\mathcal{B}$, such that $\lvert B_i\rvert\geqslant \lvert B_j\rvert$ for $i<j$, the set $\{\Delta_{B_i}\}_{i \in I}$ is a truncator set of summands for the translated simplex. Therefore, according to Definition~\ref{d_Mink_realizacija}, this is indeed an $n$-dimensional Minkowski-realisation of the simplicial complex of all nested sets with respect to $\mathcal{B}-\{[n+1]\}$ (see Remark~\ref{r_postnikov_i_nas_nested}). \section{the $2$-permutoassociahedron as a minkowski sum} \label{s_dvaD} In this section we answer Question~\ref{q_pitanje} in the case $n=2$. By Theorem~\ref{t_realizacijaZoran}, the dodecagon $\mathbf{PA}_{2}$ realises $C$ (see Figure~\ref{s:kp3pa2}). Thus, at the very beginning of this section, we could deliver 12 polytopes whose sum with $\Delta_{[3]}$ is a dodecagon normally equivalent to $\mathbf{PA}_{2}$ and show that all conditions of Definition~\ref{d_Mink_realizacija} are satisfied. Instead, we choose another approach, which leads us to the general criteria for finding these summands. As we shall see later, in higher dimensions most of the required summands are neither simplices nor sums of simplices. Moreover, they need not even be simple polytopes. According to Section 2, we start with the triangle, i.e.\ the simplicial complex \[ C_0=\bigl \{ \emptyset,\{1\},\{2\}, \{3\},\{1,2\},\{1,3\},\{2,3\} \bigr \} \] and its building set \[ \mathcal{B}_0=\bigl \{\{1\},\{2\}, \{3\},\{1,2\},\{1,3\},\{2,3\} \bigr \}, \] which leads us to the simplicial complex $C_1$ realised by the 2-permutohedron, i.e.\ a hexagon. There are the following 6 maximal 0-nested sets: \begingroup \small \[ \bigl\{\{1,2\},\{1\}\bigr\},\; \bigl\{\{1,2\},\{2\}\bigr\},\; \bigl\{\{1,3\},\{1\}\bigr\},\; \bigl\{\{1,3\},\{3\}\bigr\},\; \bigl\{\{2,3\},\{2\}\bigr\},\; \bigl\{\{2,3\},\{3\}\bigr\}, \] \endgroup and thence, \[\begin{array}{rll} \mathcal{B}_1=\Bigl\{ & \bigl\{\{1\}\bigr\},\;\bigl\{\{2\}\bigr\},\;\bigl\{\{3\}\bigr\},\; \bigl\{\{1,2\}\bigr\},\; \bigl\{\{1,3\}\bigr\},\; \bigl\{\{2,3\}\bigr\},\; \bigl\{\{1,2\},\{1\}\bigr\}, & \\ \; & \bigl\{\{1,2\},\{2\}\bigr\},\; \bigl\{\{1,3\},\{1\}\bigr\}, \; \bigl\{\{1,3\},\{3\}\bigr\},\; \bigl\{\{2,3\},\{2\}\bigr\},\; \bigl\{\{2,3\},\{3\}\bigr\}& \Bigr\}, \end{array}\] i.e.\ $\mathcal{B}_1=C_1-\{\emptyset\}$.
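Before listing the facet-defining halfspaces of $\mathbf{PA}_2$ explicitly, note that their right-hand sides are simply the values $\kappa(k,l)$ from Section 2; the following short Python check (illustrative only) reproduces them.
\begin{verbatim}
def kappa(k, l, n):
    return (3**(k+l+1) - 3**(l+1)) / 2 + (3**k - 3*k) / (3**n - n - 1)

n = 2
print(kappa(1, 0, n))   # 3.0   -> x_{i_2} >= 3
print(kappa(1, 1, n))   # 9.0   -> x_{i_1} + x_{i_2} >= 9
print(kappa(2, 0, n))   # 12.5  -> x_{i_1} + 2 x_{i_2} >= 12.5
\end{verbatim}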
According to the elements of the building set, we have the following set of 12 halfspaces: \[\left\{\begin{array}{lllll} \text{ } & \text{ }& x_{i_2}& \geqslant & 3 \\ x_{i_1} & + & x_{i_2}& \geqslant & 9 \\ x_{i_1} & + & 2x_{i_2}& \geqslant & 12.5, \end{array}\right.\] where $i_1$ and $i_2$ are distinct elements of the set $[3]$. The simplicial complex $C$ is realised by the polytope $\mathbf{PA}_2$ defined as the intersection of the previous set of facet-defining halfspaces and the hyperplane $x_1+x_2+x_3=27$. It can be verified that $\mathbf{PA}_2$ is really a dodecagon in $\mathbf{R}^3$. A very efficient tool for such a verification is \texttt{polymake}, an open-source software system for research in polyhedral geometry. It offers numerous tools for dealing with polytopes in different ways. In particular, it is possible to define a polytope as a Minkowski sum of already known ones. For representing $\mathbf{PA}_2$ (see Figure~\ref{s:javaview}, left), it is enough to use the convex hull code cdd \cite{F05} and \texttt{polymake}'s standard tool for interactive visualisation, JavaView \cite{PHPR99}. We extensively use \texttt{polymake} for all verifications that appear in this section. \begin{figure}[h!h!h!] \begin{center} \includegraphics[width=0.8\textwidth]{2DpermutoasociedriJAVAVIEW.JPG} \caption{JavaView visualisation of $\mathbf{PA}_2$ and $^M PA_2$} \label{s:javaview} \end{center} \end{figure} By Corollary~\ref{c_labele_odPA_n}, all the edges of $\mathbf{PA}_2$ are properly labelled by the elements of $\mathcal{B}_1$ such that the edge labelled by $\beta$ is contained in $\pi_{\beta}$. Also, by Proposition~\ref{p_labele_odrelaizatoraC}, two edges have a common vertex if and only if their labels are comparable. According to Question~\ref{q_pitanje} and Definition~\ref{d_Mink_realizacija}, our task is to establish a function $\varphi:\mathcal{B}_1\longrightarrow\mathcal{M}_3$ such that if $$ ^M PA_2=\Delta_{[3]}+\sum_{\beta\in \mathcal{B}_1} \varphi(\beta),$$ then $ ^M PA_2$ is also a dodecagon satisfying Definition~\ref{d_Mink_realizacija}(iii). These two dodecagons have to be normally equivalent. If their edges are properly labelled by the elements of $\mathcal{B}_1$, Proposition~\ref{p_paralelni_feseti} implies that the equilabelled edges have to be parallel. Let the image of every singleton $\beta\in\mathcal{B}_1$ be the corresponding simplex, i.e.\ \[ \varphi(\beta)= \Delta_{\cup \beta}. \] From the end of the previous section, we have that the partial sum $$S=\Delta_{[3]}+\sum_{\substack{\beta\in \mathcal{B}_1\\ \lvert \beta \rvert =1}} \varphi(\beta)=\Delta_{[3]}+\Delta_{\{1\}}+\Delta_{\{2\}}+\Delta_{\{3\}}+\Delta_{\{1,2\}}+\Delta_{\{1,3\}}+\Delta_{\{2,3\}}$$ is a completely truncated triangle in $\mathbf{R}^3$, which is a 2-dimensional Minkowski-realisation of the simplicial complex $C_1$. It is a hexagon with three pairs of parallel sides whose edges can be properly labelled by the corresponding $B\subset [3]$. Let us label these edges by $\{B\}$, i.e.\ by the corresponding singleton elements of $\mathcal{B}_1$. Note that they are parallel to the equilabelled edges of $\mathbf{PA}_2$. \begin{figure}[h!h!h!]
\begin{center} \begin{tikzpicture} [baseline, scale=1.3] \filldraw (-1.5,1) circle (1pt) (-0.5,1.5) circle (1pt) (0.5,1.5) circle (1pt) (1.5,1) circle (1pt); \draw (-2,0)--(-1.5,1)--(-0.5,1.5)--(0.5,1.5)--(1.5,1)-- (2,0); \draw [dashed] (-1.5,1)--(-1.25,1.5)--(-0.5,1.5) (0.5,1.5)--(1.25,1.5)--(1.5,1) ; \draw (-2.2,0.4) node[] {\scriptsize $\bigl\{\{1\}\bigr\}$} (0,1.7) node {\scriptsize $\bigl\{\{1,2\}\bigr\}$} (2.2,0.4) node{\scriptsize $\bigl\{\{2\}\bigr\}$} (-1.6,1.3) node {\scriptsize $\bigl\{\{1,2\},\{1\}\bigr\}$} (1.6,1.3) node {\scriptsize $\bigl\{\{1,2\},\{2\}\bigr\}$}; \end{tikzpicture} \end{center} \caption{Properly labelled facets} \label{s:skica_labele} \end{figure} It remains to specify the images of the six non-singleton elements of $\mathcal{B}_1$, which are of the form $\bigl\{\{i_2,i_1\},\{i_2\}\bigr\}$. According to Definition~\ref{d_Mink_realizacija}(iii), for any order of the summands $\varphi\bigl(\bigl\{\{i_2,i_1\},\{i_2\}\bigr\}\bigr)$, each of them should be a truncator summand for the currently obtained partial sum. Let $\{P_i\}_{i \in [6]}$ be an indexed set of all these summands and let us consider all partial sums obtained by adding the elements of this set to the hexagon $S$, step by step. We start with $S_0=S$ and consider every partial sum $S_i=S_{i-1}+P_i$, $i\in[6]$. Notice that $^MPA_2=S_6$. Since for every $i\in[6]$ there is a truncation of $S_{i-1}$ in some vertex that is normally equivalent to $S_{i}$, at the $i$th step we can label the edges of $S_i$ by the elements of $\mathcal{B}_1$ in the following way: the corresponding edges of $S_{i}$ and $S_{i-1}$ are equilabelled, while the newly appeared edge is labelled by a new label $\beta_i$ (see Remark~\ref{r_correspondingfacettruncation}). At the end, in order to have all the edges of $S_6$ properly labelled, the following must hold for every $i \in[6]$: if $P_i$ corresponds to $\varphi\bigl(\bigl\{\{i_2,i_1\},\{i_2\}\bigr\}\bigr)$, then $S_i \simeq$ tr$_VS_{i-1}$, where $V$ is the common vertex of the edges labelled by $\bigl\{\{i_2,i_1\}\bigr\}$ and $\bigl\{\{i_2\}\bigr\}$, and $\beta_i=\bigl\{\{i_2,i_1\},\{i_2\}\bigr\}$ (see Figure~\ref{s:skica_labele}). Moreover, since the dodecagons have to be normally equivalent, for every edge of the partial sum $S_i$ there is a parallel equilabelled edge of $\mathbf{PA}_2$. The proof of the following proposition is quite different from what we discuss here, so it is given later in Section 5. \begin{prop} \label{p_linijaNE} If $\varphi$ is a function satisfying the conditions of Definition~\ref{d_Mink_realizacija}, then for every two distinct elements $i_1,i_2\in[3]$, $\varphi\bigl(\bigl\{\{i_2,i_1\},\{i_2\}\bigr\}\bigr)$ is not a line segment. \end{prop} From the previous proposition and Corollary~\ref{prop_minkowski_svojstva2}(ii), for every two distinct elements $i_1,i_2\in[3]$, $\varphi\bigl(\bigl\{\{i_2,i_1\},\{i_2\}\bigr\}\bigr)$ is a polygon. Since the order of the summands is irrelevant, we start with $\varphi\bigl(\bigl\{\{1,2\},\{1\}\bigr\}\bigr)$ being a triangle $T_1T_2T_3$, where $T_1(a_1,b_1,c_1)$, $T_2(a_2,b_2,c_2)$ and $T_3(a_3,b_3,c_3)$ are points in $\mathbf{R}^3$. This triangle is a truncator summand for $S$ such that $S+T_1T_2T_3$ is a heptagon normally equivalent to the heptagon obtained from $S$ by truncation in the vertex common to the edges labelled by $\bigl\{\{1,2\}\bigr\}$ and $\bigl\{\{1\}\bigr\}$.
Instead of continuing with the whole sum $S$, we consider its partial sum $$\Delta_{[3]}+\Delta_{\{1\}} +\Delta_{\{2\}}+\Delta_{\{3\}}+\Delta_{\{1,2\}},$$ which is the trapezoid $ABCD$ given in Figure~\ref{s:skica_12}. Namely, since the whole sum $S$ is a Minkowski-realisation of $C_1$, its summands indexed by non-singleton sets form a truncator set of summands for the triangle. Hence, we are able to remove some of them such that the sum of the remaining summands has the vertex in which the edges labelled by $\{1,2\}$ and $\{1\}$ meet (the vertex $D$). \begin{figure}[h!h!h!] \begin{center} \begin{tikzpicture} [baseline, scale=1.25] \filldraw [black] (0,1.5) circle (1pt) (-1,-0.5) circle (1pt) (1,-0.5) circle (1pt); \filldraw [red] (-1,0.5) circle (1pt) (1,0.5) circle (1pt) (2,-1.5) circle (1pt) (-2,-1.5) circle (1pt) (0,-1.5) circle (1pt); \draw (0,1.5)--(-1,-0.5)--(1,-0.5)--(0,1.5); \begin{scope}[>=latex] \draw[->,dashed] (0,1.5) -- (-1,0.5); \draw[->,dashed] (0,1.5) -- (1,0.5); \draw[->,dashed] (-1,-0.5) -- (-2,-1.5); \draw[->,dashed] (-1,-0.5) -- (0,-1.5); \draw[->,dashed] (1,-0.5) -- (0,-1.5); \draw[->,dashed] (1,-0.5) -- (2,-1.5); \end{scope} \draw [red] (-1,0.5) node[above, xshift=-0.5cm] {\scriptsize $D(1,2,2)$} --(1,0.5) node [above, xshift=0.5cm]{\scriptsize $C(2,1,2)$}--(2,-1.5) node[below, xshift=0.2cm] {\scriptsize $B(3,1,1)$}--(-2,-1.5) node[below,xshift=-.2cm] {\scriptsize $A(1,3,1)$}-- (-1,0.5); \draw[red] (-1.8,-0.2) node[] {\scriptsize $\{1\}$} (0,0.7) node {\scriptsize $\{1,2\}$} (1.8,-0.2) node {\scriptsize $\{2\}$} (0,-1.8) node {\scriptsize $\{3\}$}; \end{tikzpicture} \end{center} \caption{The partial sum $\Delta_{[3]}+\Delta_{\{1\}}+\Delta_{\{2\}}+\Delta_{\{3\}}+\Delta_{\{1,2\}}$} \label{s:skica_12} \end{figure} \begin{figure}[h!h!h!] \begin{center} \begin{tikzpicture} [baseline, scale=1.4] \draw (1,1) node[above, xshift=0.8cm, yshift=-0.15cm] {\scriptsize $C(2,1,2)$} -- (2,-1) node [below, xshift=0.8cm,yshift=0.2cm ]{\scriptsize $B(3,1,1)$} -- (-2,-1) node[below, xshift=-0.6cm, yshift=0.2cm] {\scriptsize $A(1,3,1)$} -- (-1,1) node[above,xshift=-.6cm, yshift=-0.1cm] {\scriptsize $D(1,2,2)$} -- cycle; \begin{scope}[>=latex] \draw[->,dashed] (2,-1) -- (3,-1.5); \draw[->,dashed] (-2,-1) -- (-1,-1.5); \draw[->,dashed] (-1,1) -- (0,0.5); \draw[->,dashed] (1,1) -- (2,0.5); \draw[->,dashed] (2,-1) -- (2.5,-0.5); \draw[->,dashed] (-2,-1) -- (-1.5,-0.5); \draw[->,dashed] (1,1) -- (1.5,1.5); \draw[->,dashed] (-1,1) -- (-0.5,1.5); \draw[->,dashed] (2,-1) -- (1.2,-1.5); \draw[->,dashed] (-2,-1) -- (-2.8,-1.5); \draw[->,dashed] (1,1) -- (0.2,0.5); \draw[->,dashed] (-1,1) -- (-1.8,0.5); \end{scope} \filldraw[red] (-2.8,-1.5) circle (1pt) (-1.8,0.5) circle (1pt) (1.5,1.5)circle (1pt) (-0.5,1.5) circle (1pt) (3,-1.5) circle (1pt); \draw[red] (-2.8,-1.5) -- (3,-1.5) -- (1.5,1.5) -- (-0.5,1.5) -- (-1.8,0.5) -- cycle; \filldraw (0.2,0.5) circle (1pt) (1.2,-1.5) circle (1pt) (2.5,-0.5) circle (1pt) (2,0.5) circle (1pt) (0,0.5) circle (1pt) (-1,-1.5) circle (1pt) (-1.5,-0.5) circle (1pt); \draw (0.4,0.3) node {\scriptsize $C_1$}; \draw (-0.2,0.3) node {\scriptsize $D_3$}; \draw (-1,-1.7) node {\scriptsize $A_3$}; \draw (2.2,0.4) node {\scriptsize $C_3(2+a_3,1+b_3,2+c_3)$}; \draw (0.4,-1.7) node {\scriptsize $B_1(3+a_1,1+b_1,1+c_1)$}; \draw (-3.2,-1.7) node {\scriptsize $A_1(1+a_1,3+b_1,1+c_1)$}; \draw (-2.9,0.5) node {\scriptsize $D_1(1+a_1,2+b_1,2+c_1)$}; \draw (2,1.7) node {\scriptsize $C_2(2+a_2,1+b_2,2+c_2)$}; \draw (2.6,-0.3) node {\scriptsize $B_2$}; \draw (-0.6,-0.3) node {\scriptsize
$A_2(1+a_2,3+b_2,1+c_2)$}; \draw (-0.7,1.7) node {\scriptsize $D_2(1+a_2,2+b_2,2+c_2)$}; \draw (2.8,-1.7) node {\scriptsize $B_3(3+a_3,1+b_3,1+c_3)$}; \end{tikzpicture} \end{center} \caption{ $\Delta_{[3]}+\Delta_{\{1\}}+\Delta_{\{2\}}+\Delta_{\{3\}}+\Delta_{\{1,2\}}+T_1T_2T_3$} \label{s:skica_12_1} \end{figure} In order to find $T_1T_2T_3$, we focus on an appropriate ``local polytope'', e.g.\ the trapezoid $ABCD$. We assume that $T_1T_2T_3$ is a truncator summand for $ABCD$, which means that the polytope $$ABCD+T_1T_2T_3=conv\{A_1,A_2,A_3,B_1,B_2,B_3,C_1,C_2,C_3,D_1,D_2,D_3\}$$ is a pentagon normally equivalent to the polytope obtained from the trapezoid by truncation in the vertex $D$. Also, $-(2,1,0)$ should be an outward normal vector to the newly appeared edge. Let us assume that $D_1D_2$ is that edge, and that $D_1$ and $D_2$ also belong to the edges with the outward normal vectors $-(1,0,0)$ and $-(1,1,0)$, respectively (see Figure~\ref{s:skica_12_1}). We also assume that $A_1,B_3$ and $C_2$ are the vertices of $ABCD+T_1T_2T_3$ such that $A_1$ is common to the edges with the outward normals $-(1,0,0)$ and $-(0,0,1)$, $B_3$ is common to the edges with the outward normal vectors $-(0,1,0)$ and $-(0,0,1)$, and $C_2$ is common to those with the outward normal vectors $-(0,1,0)$ and $-(1,1,0)$. This implies the following system of equations: \[ \systeme{ a_1+b_1+c_1=a_2+b_2+c_2, a_1+b_1+c_1=a_3+b_3+c_3,2a_1+b_1=2a_2+b_2,c_1=c_3,b_2=b_3. } \] The first two follow from the fact that all translates $A_iB_iC_iD_i$ of the trapezoid $ABCD$, $i \in [3]$, have to lie in the same plane parallel to the plane $x_1+x_2+x_3=5$, in which $ABCD$ lies. Since $A_2,A_3,B_1,B_2,C_1,C_3 \in conv\{A_1,B_3,C_2,D_1,D_2\}$, we have the following set of inequalities: \[\begin{array}{lll} a_1 < a_2 \leqslant a_3, & b_2 \leqslant b_3 < b_1, & c_1\leqslant c_3 < c_2 . \end{array}\] Solving the system, we get that the points $T_i$ are $$T_1(a_1,b_1,c_1), \; T_2\bigl(\frac{2a_1+b_1-b_3}{2},b_3,\frac{b_1-b_3}{2}+c_1\bigr), \; T_3(a_1+b_1-b_3,b_3,c_1),$$ i.e. $$T_1(0,b_1-b_3,0), \; T_2\bigl(\frac{b_1-b_3}{2},0,\frac{b_1-b_3}{2}\bigr), \; T_3(b_1-b_3,0,0),$$ up to translation. It remains to conclude that $T_1T_2T_3$ is any translate of a triangle whose vertices are $$T_1(0,2\lambda,0), \; T_2\bigl(\lambda,0,\lambda\bigr), \; T_3(2\lambda,0,0),$$ where $\lambda>0$ (see Remark~\ref{r_lambdaP}). Looking carefully at Figure~\ref{s:skica_12_1}, we can notice that the triangle $ABC$ is also one of them, for $\lambda=1$. Let $\varphi \bigl(\bigl\{\{1,2\},\{1\}\bigr\}\bigr)$ be $T_1T_2T_3=conv\{(0,2,0),(1,0,1),(2,0,0)\}$. One can verify that for the vertex $V$ of the hexagon $S$, which is common to the edges labelled by $\bigl\{\{1,2\}\bigr\}$ and $\bigl\{\{1\}\bigr\}$, indeed $S+T_1T_2T_3 \simeq$ tr$_VS$ holds. Considering an appropriate local polytope, we define the images of all non-singleton elements of the building set analogously: \[ \varphi\bigl(\bigl\{\{i_2,i_1\},\{i_2\}\bigr\}\bigr)=conv\{2e_{i_1},e_{i_2}+e_{i_3},2e_{i_2}\}, \] where $i_1,i_2$ and $i_3$ are mutually distinct elements of the set $[3]$. Together with the already defined images of the singleton elements of $\mathcal{B}_1$, we obtain the polytope $$ ^M PA_2=\Delta_{[3]}+\sum_{\beta\in \mathcal{B}_1} \varphi(\beta).$$ One may verify that $^M PA_2$ is a dodecagon whose vertices are all permutations of the coordinates of the points $(1,5,13)$ and $(2,3,14)$.
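This claim can also be checked by brute force, without \texttt{polymake}: by Proposition~\ref{prop_minkowski_svojstva1}, it suffices to add the vertex sets of the thirteen summands pointwise and to compute a convex hull, and since all the resulting points lie in the plane $x_1+x_2+x_3=19$, the first two coordinates suffice for the hull computation. A Python sketch of this verification (assuming SciPy is available):
\begin{verbatim}
from itertools import combinations, permutations, product
import numpy as np
from scipy.spatial import ConvexHull

e = np.eye(3)

def simplex(I):                        # vertices of Delta_I
    return [e[i - 1] for i in I]

summands = [simplex([1, 2, 3])]
summands += [simplex([i]) for i in [1, 2, 3]]
summands += [simplex(p) for p in combinations([1, 2, 3], 2)]
for i1, i2 in permutations([1, 2, 3], 2):
    i3 = 6 - i1 - i2                   # the remaining element of [3]
    summands.append([2*e[i1-1], e[i2-1] + e[i3-1], 2*e[i2-1]])

pts = np.array([sum(c) for c in product(*summands)])
hull = ConvexHull(pts[:, :2])          # project to (x_1, x_2)
verts = sorted({tuple(int(x) for x in p) for p in pts[hull.vertices]})
print(len(verts), verts)               # 12 vertices: the permutations
                                       # of (1,5,13) and (2,3,14)
\end{verbatim}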
This dodecagon can also be defined as the intersection of the hyperplane $x_1+x_2+x_3=19$ and the following set of facet-defining halfspaces: \[\left\{\begin{array}{lllll} \text{ } & \text{ }& x_{i_2}& \geqslant & 1 \\ x_{i_1} & + & x_{i_2}& \geqslant & 5 \\ x_{i_1} & + & 2x_{i_2}& \geqslant & 7, \end{array}\right.\] where $i_1$ and $i_2$ are distinct elements of the set $[3]$. Therefore, the two dodecagons are normally equivalent (see Figure~\ref{s:javaview}). We also verify that Definition~\ref{d_Mink_realizacija}(iii) is satisfied by analysing each partial sum that constitutes $ ^M PA_2$, step by step, for any order of the summands. Finally, according to Definition~\ref{d_Mink_realizacija}, we conclude that $^M PA_2$ is a 2-dimensional Minkowski-realisation of the simplicial complex $C$. \section{the $n$-permutoassociahedron as a minkowski sum}\label{s_triD} In the previous section, we gave a Minkowski-realisation of the 2-permutoassociahedron working only with the equations of the hyperplanes that define the facets of the resulting polytope. We started from local polytopes that were chosen to define particular summands. All verifications were done manually or with the help of \texttt{polymake}. This was done with the intention of postponing some definitions and claims about the relation between Minkowski sums and refinements of fans. However, these matters are necessary for the Minkowski-realisation of $n$-permutoassociahedra. \begin{prop}\label{c_fanovi_sabiraka} \emph{(cf.\ {\cite[Proposition~7.12 and the definition at p.\ 195]{Z95}})} The fan of the Minkowski sum of two polytopes is the common refinement of their individual fans, i.e.\ \[ \mathcal{N}(P_1+P_2)=\{ N_1 \cap N_2 \mid N_1 \in \mathcal{N}(P_1), N_2\in\mathcal{N}(P_2)\}. \] \end{prop} \begin{proof} [Proof of Proposition~\ref{p_linijaNE}] Let us suppose that such a line segment $L \subset\mathbf{R}^3$ exists for one pair of distinct elements $i_1,i_2 \in [3]$. By Proposition~\ref{c_fanovi_sabiraka} and Example~\ref{e_fanduzi}, the fan of the partial sum $S+L$ is the common refinement of $\mathcal{N}(S)$ and the set $\{H,H^{\geqslant},H^{\leqslant}\}$, where $H$ is the plane normal to $L$. Since $S$ is a hexagon with three pairs of parallel sides, its fan is the set consisting of three planes with a common line and the six dihedra determined by them. It is straightforward to check that every common refinement of such a fan and a set $\{H,H^{\geqslant},H^{\leqslant}\}$ is either the same fan or the fan of an octagon with four pairs of parallel sides. Hence, $S+L$ is not a heptagon, i.e.\ $L$ is not a truncator summand for $S$, which contradicts Definition~\ref{d_Mink_realizacija}(iii). \end{proof} \begin{figure}[h!h!h!]
\begin{center} \begin{tabular}{cccc} \begin{tikzpicture}[scale=0.7] \filldraw[fill=gray!40!white, draw=black!40!black,line width=.8pt] (-1,1) -- (0,-1) -- (-4,-1)-- (-3,1) -- cycle; \draw (-2,0) node {\scriptsize $ABCD$}; \draw (0.8,0) node {$+$}; \filldraw[fill=gray!20!white, draw=black!40!black,line width=.7pt] (-0.25,1.375) -- (-1,1) -- (-1,1.84) (-4.75,-0.625) -- (-4,-1) -- (-4,-1.84) (0,-1.84) -- (0,-1) -- (0.75,-0.625) (-3.75,1.375) -- (-3,1) -- (-3,1.84); \end{tikzpicture} & \begin{tikzpicture} [baseline, scale=0.55] \filldraw[fill=gray!40!white, draw=black!40!black,line width=.8pt] (0,4) -- (1,2) -- (-3,2) -- cycle; \draw (-0.7,2.75) node {\scriptsize $T_1T_2T_3$}; \filldraw[fill=gray!20!white, draw=black!40!black,line width=.7pt] (-3.5,2.75) -- (-3,2) -- (-3,1.16) (-0.5,4.75) -- (0,4) -- (0.75,4.325) (1,1.16) -- (1,2) -- (1.75,2.325); \end{tikzpicture} & \begin{tikzpicture} [scale=0.75] \filldraw[fill=gray!40!white, draw=black!40!black,line width=.8pt] (-1,1) -- (0,-1) -- (-4,-1) -- (-3.125,0.75) -- (-2.75,1) -- cycle; \draw (-2,0) node {\scriptsize $ABCD_2D_1$}; \draw (-4.5,0) node {$=$}; \filldraw[fill=gray!20!white, draw=black!40!black,line width=.7pt] (-0.25,1.375) -- (-1,1) -- (-1,1.84) (-4.75,-0.625) -- (-4,-1) -- (-4,-1.84) (0,-1.84) -- (0,-1) -- (0.75,-0.625) (-3.875,1.125) -- (-3.125,0.75) -- (-3.6,1.467) (-3.225,1.717) -- (-2.75,1) -- (-2.75,1.84); \end{tikzpicture} \\ \begin{tikzpicture} \begin{scope} [scale=1.8] \filldraw[fill=gray!20!white, draw=black!40!black,line width=.7pt] (-8.75,-2.625) -- (-8,-3) -- (-8,-2.16) (-8,-3.84) -- (-8,-3) -- (-7.25,-2.625) (-8.75,-2.625) -- (-8,-3) -- (-8,-3.84) (-7.25,-2.625) -- (-8,-3) -- (-8,-2.16); \end{scope} \end{tikzpicture} & \begin{tikzpicture} \begin{scope} [scale=1.8] \filldraw[fill=gray!20!white, draw=red,line width=.7pt] (-0.5,0.75) -- (0,0) -- (0,-0.84) (-0.5,0.75) -- (0,0) -- (0.75,0.325) (0,-0.84) -- (0,0) -- (0.75,0.325); \end{scope} \end{tikzpicture} & \begin{tikzpicture} \begin{scope} [scale=1.8] \filldraw[fill=gray!20!white, draw=black!40!black,line width=.7pt] (0.75,0.375) -- (0,0) -- (0,0.84) (-0.75,0.375) -- (0,0) -- (0,-0.84) (0,-0.84) -- (0,0) -- (0.75,0.375) (-0.75,0.375) -- (0,0) -- (-0.575,0.717) (-0.575,0.717) -- (0,0) -- (0,0.84); \draw[red, line width=.8pt] (0,0) -- (-0.575,0.717); \draw[red, line width=.5pt] (0,0) -- (0.75,0.375); \draw[red, line width=.5pt] (0,0) -- (0,-0.84); \end{scope} \end{tikzpicture} \end{tabular} \end{center} \caption{Normal cones and fans} \label{s:skica_fanovi} \end{figure} The previous figure illustrates the common refinement of the individual fans of the trapezoid $ABCD$ and the triangle $T_1T_2T_3$ (see Section 4). The fan of the resulting pentagon is a refinement of the trapezoid's fan by the ray contained in its normal cone at the vertex $D$. Let us remember that $T_1T_2T_3$ was defined as a translate of the triangle $ABC$, and therefore, its sum with the trapezoid has an edge parallel to $AC$. The text below determines a relationship between two polytopes $P_1$ and $P_2$ whose sum is normally equivalent to tr$_F P_1$. In other words, we search for a polytope $P_2$ which is a truncator summand for a given polytope $P_1$. From \cite{P09}, if $P_1$ is a simplex, then $P_2$ is the convex hull of those vertices of $P_1$ that do not belong to $F$. The following proposition shows that the same holds for every simple polytope $P_1$ when $F$ is a vertex. But, if $\dim(F)>0$, then tr$_F P_1 \sim P_1+P_2$ usually fails. One can find a lot of examples. 
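Before stating the proposition, here is the vertex-truncation recipe in action on a square, as a minimal Python sketch (illustrative only, assuming SciPy is available): removing the vertex $v=(0,0)$ and adding the convex hull of the remaining vertices cuts exactly that corner.
\begin{verbatim}
from itertools import product
import numpy as np
from scipy.spatial import ConvexHull

P1 = np.array([(0, 0), (1, 0), (1, 1), (0, 1)])   # a square
P2 = P1[1:]                     # conv(V(P1) - {v}) for v = (0, 0)

pts = np.array([p + q for p, q in product(P1, P2)])
print(len(ConvexHull(pts).vertices))   # 5: a pentagon, i.e. tr_v P1
\end{verbatim}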
\begin{prop}\label{p_trunkacijatemenaprostog} Let $P_1\in \mathcal{M}_n$ be an $n$-polytope whose vertex $v$ is contained in exactly $n$ facets. If $P_2=conv(\mathcal{V}(P_1)-\{v\})$, then there is a truncation $\emph{tr}_v P_1$ such that \[P_1+P_2\simeq \emph{tr}_vP_1.\] \end{prop} \begin{proof} Let $V=\{v_1,\ldots,v_k\}$ be the set $\mathcal{V}(P_1)-\{v\}$. Since $v$ is contained in exactly $n$ facets, there are exactly $n$ vertices adjacent to $v$ in $P_1$. Let us suppose that $v$ and $v_i$ are adjacent in $P_1$ if and only if $i\in[n]$. Then, for every $i\in[n]$, let $w_i$ be the midpoint of the edge $\overline{v_iv}$. Since there exists a hyperplane which contains $w_i$ for every $i\in[n]$, the polytope $conv(\{w_1,\ldots,w_n\} \cup V)$ is a truncation of $P_1$ in $v$, which we denote by tr$_vP_1$. Hence, \[ \text{tr}_vP_1=conv(\{\dfrac{v+v_1}{2},\ldots,\dfrac{v+v_n}{2}\} \cup V ). \] On the other hand, from Proposition~\ref{prop_minkowski_svojstva1} and the distributivity law, we have the following equations: \[\begin{array}{rl} P_1+P_2= & conv(\{v\} \cup V)+conv V=conv\bigl((\{v\}\cup V)+V\bigr) \\[0.7ex] = & conv\bigl((\{v\}+V)\cup(V+V)\bigr)=conv\bigl((\{v\}+V)\cup 2V\bigr). \end{array}\] The last equation is obtained from the fact that the sum of two different points $v_i$ and $v_j$ is the midpoint of the line segment whose endpoints are $2v_i$ and $2v_j$. By Corollary~\ref{prop_minkowski_svojstva2}(i), we may suppose that $v=0\in \mathbf{R}^n$ without loss of generality. This implies that $$P_1+P_2=conv\bigl(V\cup2V\bigr) \; \;\text{and} \; \; \text{tr}_vP_1=conv(\{\dfrac{v_1}{2},\ldots,\dfrac{v_n}{2}\} \cup V).$$ Therefore, $$2\text{tr}_vP_1=conv(\{v_1,\ldots,v_n\} \cup 2V)\subseteq conv(V \cup 2V)=P_1+P_2.$$ For every $j\in[k]-[n]$, we consider the line segment $L_j=\overline{v_jv}$. Since $v$ and $v_j$ are vertices of the polytope $P_1$, this line segment intersects the truncation hyperplane in the point $w$ which belongs to $conv\{w_1,\ldots,w_n\}$. Hence, $$w=\sum\limits_{i=1}^{n} \alpha_iw_i=\sum\limits_{i=1}^{n} \alpha_i\dfrac{v+v_i}{2}=\sum\limits_{i=1}^{n} \alpha_i\dfrac{v_i}{2},$$ where $\sum\limits_{i=1}^{n} \alpha_i=1$ and $0\leqslant\alpha_i<1$ for every $i\in[n]$. If we suppose that the midpoint of $L_j$ belongs to the line segment $\overline{vw}$, then, since every $w_i$, $i \in [n]$, is the midpoint of an edge adjacent to $v$, we have that $v_j \in 2\,\overline{vw}$. This further implies that $v_j \in conv\{v,v_1,\ldots,v_n\}$, and thus, it cannot be a vertex of $P_1$. Therefore, the midpoint of $L_j$ belongs to the line segment $\overline{wv_j}$, i.e.\ $\dfrac{v_j}{2}\in conv\{w,v_j\}$. This means that there exist $0<\lambda_1,\lambda_2<1$ such that $\lambda_1+\lambda_2=1$ and \[ \dfrac{v_j}{2}=\lambda_1w+\lambda_2v_j=\lambda_1\sum\limits_{i=1}^{n} \alpha_i\dfrac{v_i}{2}+\lambda_2v_j ,\] which entails that \[ v_j=\sum\limits_{i=1}^{n} \lambda_1\alpha_iv_i+\lambda_22v_j .\] Since $0\leqslant\lambda_1\alpha_i<1$ for every $i\in[n]$, and $$\lambda_2+\sum\limits_{i=1}^{n} \lambda_1\alpha_i=\lambda_2+\lambda_1\sum\limits_{i=1}^{n} \alpha_i=\lambda_2+\lambda_1=1,$$ we conclude that $v_j \in conv(\{v_1,\ldots,v_n\} \cup 2V)$, which implies $$conv(V \cup 2V)\subseteq conv(\{v_1,\ldots,v_n\} \cup 2V).$$ Hence, $P_1+P_2=2\text{tr}_vP_1$. It remains to apply Remark~\ref{r_lambdaP}. \end{proof} \begin{prop}\label{p_dovoljnomakskonuseposmatrati} For $P,P_1,P_2 \in \mathcal{M}_n$, $P_1+P_2 \simeq P$ holds if and only if the following two conditions are satisfied.
\begin{enumerate} \item[\emph{(i)}] Every maximal normal cone in $\mathcal{N}(P)$ is the intersection of two maximal normal cones in $\mathcal{N}(P_1)$ and $\mathcal{N}(P_2)$. \item[\emph{(ii)}] If the intersection of two maximal normal cones in $\mathcal{N}(P_1)$ and $\mathcal{N}(P_2)$ is an $n$-cone, then it is a maximal normal cone in $\mathcal{N}(P)$. \end{enumerate} \begin{proof} Suppose that the sets of maximal normal cones in the fans of two arbitrary polytopes are equal. Each normal cone in one of the fans is a face of some maximal normal cone in that fan. Then, by assumption, it is also a face of some maximal normal cone in the other fan. Hence, by Definition~\ref{d_normalconefan}, that cone is contained in both fans. This, together with Definition~\ref{d_norm_ekvivalentni politopi} and Definition~\ref{d_normalconefan}, implies that two polytopes are normally equivalent if and only if the sets of maximal normal cones in their fans are equal. It remains to apply Proposition~\ref{c_fanovi_sabiraka}. \end{proof} According to Remark~\ref{r_correspondingfacettruncation}, let $P_1\in \mathcal{M}_n$ be a $d$-polytope defined as the intersection of the following $m$ facet-defining halfspaces \[ \alpha_i^{\geqslant}: \langle a_i,x\rangle\geqslant b_i, \; \; 1\leqslant i\leqslant m, \] and let tr$_FP_1$ be a parallel truncation of $P_1$ in its face $F$ defined as the intersection of the following $m+1$ facet-defining halfspaces \[\alpha_i^{\geqslant}: \langle a_i,x\rangle\geqslant b_i, \; \; 0\leqslant i\leqslant m.\] \begin{dfn}\label{d_pideformacija} Let $P_1$ and \emph{tr}$_FP_1$ be two polytopes defined as above. A polytope $P_2\in \mathcal{M}_n$ is an $F$-\emph{deformation}\footnote{This definition is inspired by \cite[Definition~15.01]{PRW08}, which defines several types of deformation cones of a given polytope.} of $P_1$ when the following conditions are satisfied: \begin{enumerate} \item [\emph{(i)}] $P_2$ is the intersection of the halfspaces $$\pi_i^{\geqslant}: \langle a_i,x\rangle\geqslant c_i, \; 0\leqslant i\leqslant m, \text{ such that } P_2 \cap \pi_0\simeq P_1\cap \alpha_0;$$ \item [\emph{(ii)}]for every $S\subset\{0,\ldots,m\}$ \[\bigcap\{\alpha_i\mid i \in S\} \text{ is a vertex of } \emph{tr}_FP_1 \Rightarrow\bigcap\{\pi_i\mid i \in S\} \text{ is a vertex of } P_2.\] \end{enumerate} \end{dfn} \begin{rem}\label{r_deformacijagruboreceno} Condition \emph{(ii)}, together with the first part of condition \emph{(i)}, means that $P_2$ can be obtained from tr$_FP_1$ by parallel translations of the facets without, roughly speaking, crossing over the vertices\footnote{``...by moving the vertices such that directions of all edges are preserved (and some edges may accidentally degenerate into a single point).''\cite{P09}}. If $f$ is the facet of \emph{tr}$_FP_1$ contained in the truncation hyperplane, then the second part of condition \emph{(i)} implies that $d-1\leqslant \dim(P_2)\leqslant d$, and that $\pi_0$ is a supporting hyperplane for $P_2$ defining a $(d-1)$-face normally equivalent to $f$. If $\dim(P_2)=d-1$, that face is $P_2$ itself. \end{rem} \begin{rem}\label{r_korespondingdeformacijateme} We say that a vertex $v$ of an $F$-deformation of $P_1$ corresponds to some vertex $u$ of $\emph{tr}_FP_1$ if $v$ corresponds to $u$ according to \emph{Definition~\ref{d_pideformacija}(ii)}. \end{rem} \begin{exm}\label{e_trunkacijajedeformacija} Every parallel truncation \emph{tr}$_FP$ of an arbitrary polytope $P$ is an $F$-deforma\-tion of $P$.
\end{exm} \begin{exm} The triangle $ABC$ is a $D$-deformation of the trapezoid $ABCD$ illustrated in Figure~\ref{s:skica_12}. Figures~\ref{s:skica_a4}, \ref{s:skica_a2} and \ref{s:skica_a3} depict some 3-nestohedra and their deformations. \end{exm} \begin{lem}\label{l_zvezda} Let $P_1$ and \emph{tr}$_FP_1$ be two polytopes defined as above. If $P_2$ is an $F$-deformation of $P_1$, then the following claims hold. \begin{enumerate} \item[\emph{(i)}] For $v$ being a vertex of $\emph{tr}_FP_1$ and $u_2$ being its corresponding vertex of $P_2$, we have that $N_v(\emph{tr}_FP_1)\subseteq N_{u_2}(P_2)$. \item[\emph{(ii)}] Every maximal normal cone in $\mathcal{N}(P_2)$ is the union of some maximal normal cones in $\mathcal{N}(\emph{tr}_FP_1)$. \item[\emph{(iii)}] Every maximal normal cone in $\mathcal{N}(P_2)$ contains no more than one maximal normal cone to $\emph{tr}_FP_1$ at some vertex contained in the truncation hyperplane. \end{enumerate} \end{lem} \begin{proof} Without loss of generality, suppose that $P_1$ is full-dimensional. \noindent (i): Let $v$ be contained in the facets defined by the halfspaces $\{\alpha_i^{\geqslant} \mid i\in S\}$. Then, the set of rays $\{-a_i \mid i \in S\}$ spans $N_v(\text{tr}_FP_1)$, and $u_2$ is the intersection of the hyperplanes $\pi_i$, $i \in S$. Therefore, for every $i \in S$ the functional $-a_i$ attains the maximum value $-c_i$ at $u_2$ over all points in $P_2$, which implies that $-a_i$ is in the cone $N_{u_2}(P_2)$. Since all these rays are spanning rays of $N_v(\text{tr}_FP_1)$, the claim holds. \noindent(ii): By the previous claim, for every maximal normal cone $N\in \mathcal{N}(\text{tr}_FP_1)$ there is a maximal normal cone in $\mathcal{N}(P_2)$ in which $N$ is contained. Since $\mathcal{N}(\text{tr}_FP_1)$ and $\mathcal{N}(P_2)$ are complete, the claim holds. \noindent(iii): By Definition~\ref{d_pideformacija}(i), $P_2 \cap \pi_0$ is an $(n-1)$-face of $P_2$, and hence, the ray $-a_0$ is a spanning ray only of those normal cones to $P_2$ that correspond to the vertices of that face. By claim (ii), each of them contains at least one normal cone to tr$_FP_1$ at some vertex contained in $\alpha_0$. Then, since $P_2 \cap \pi_0 \sim \text{tr}_FP_1 \cap \alpha_0$, the claim follows directly from the equation $\lvert\mathcal{V}(P_2 \cap \pi_0)\rvert=\lvert\mathcal{V}(\text{tr}_FP_1 \cap \alpha_0)\rvert$. \end{proof} \begin{prop}\label{l_deformacijajesingleprofinjenje} Let $P_1$ and \emph{tr}$_FP_1$ be two polytopes defined as above. If $P_2$ is an $F$-deformation of $P_1$, then $$P_1+P_2 \simeq \emph{tr}_FP_1.$$ \end{prop} \begin{proof} Without loss of generality, we suppose that $P_1$ is full-dimensional and show the claim according to Proposition~\ref{p_dovoljnomakskonuseposmatrati}. Let $N_v$ be a normal cone to $\text{tr}_FP_1$ at a vertex $v$. The goal is to find two maximal normal cones $N_1 \in \mathcal{N}(P_1)$ and $N_2 \in \mathcal{N}(P_2)$ such that $N_v=N_1 \cap N_2$. Let $u_2$ be a vertex of $P_2$ which corresponds to $v$ according to Remark~\ref{r_korespondingdeformacijateme}. Lemma~\ref{l_zvezda}(i) guarantees that $N_v \subseteq N_{u_2}(P_2)$. If $-a_0$ is not a spanning ray of $N_v$, then $v$ is also a vertex of $P_1$, i.e.\ $N_v=N_v(P_1)$. Then, $N_v$ is the intersection of the maximal normal cones $N_v(P_1)$ and $N_{u_2}(P_2)$. Otherwise, i.e.\ if $v$ belongs to the truncation hyperplane $\alpha_0$, then there is an edge $E$ of $P_1$ which has a common vertex with $F$ and intersects $\alpha_0$ in $v$.
Let $u_1$ be that vertex, i.e.\ $u_1=E\cap F$. By Lemma~\ref{l_konusiparalelnetrunkacije}, $N_v\subseteq N_{u_1}(P_1)$, and hence, $N_v \subseteq N_{u_1}(P_1)\cap N_{u_2}(P_2)$. Now, there are two possible cases. If $N_{u_2}(P_2)=N_v$, then $N_v$ is the intersection of $N_{u_1}(P_1)$ and $N_{u_2}(P_2)$. Otherwise, by Lemma~\ref{l_zvezda}(ii) and (iii), $N_{u_2}(P_2)=N_v \cup N$, where $N$ is the union of some maximal normal cones in $\mathcal{N}(\text{tr}_FP_1)$ such that each of them corresponds to some vertex not contained in the truncation hyperplane, i.e.\ to some vertex of $P_1$ not contained in $F$. If we suppose that $N_v \subset N_{u_1}(P_1)\cap N_{u_2}(P_2)$, then there is a maximal normal cone in $\mathcal{N}(P_1)$ which is contained in $N$ and whose intersection with $ N_{u_1}$ is an $n$-cone. This is a contradiction since they are maximal normal cones in the same fan (see Remark~\ref{r_preskmaksimalnihnijemaksimalan}), and hence, $N_v=N_{u_1}(P_1)\cap N_{u_2}(P_2)$. We conclude that the first condition of Proposition~\ref{p_dovoljnomakskonuseposmatrati} is satisfied. Let $N_{u_1}$ and $N_{u_2}$ be two normal cones to $P_1$ and $P_2$ at a vertex $u_1$ and $u_2$, respectively. The goal is to show that if their intersection is a maximal cone, then it is a maximal cone in $\mathcal{N}(\text{tr}_FP_1)$. If $u_1\notin F$, then $N_{u_1} =N_{u_1}(\text{tr}_FP_1)$. By Lemma~\ref{l_zvezda}(i), $N_{u_2}$ is the union of some maximal cones in $\mathcal{N}(\text{tr}_FP_1)$, and thus, the intersection of $N_{u_1}$ and $N_{u_2}$ is a maximal cone if and only if $N_{u_1}\subseteq N_{u_2}$. In that case, their intersection is exactly $N_{u_1}$, a maximal cone in $\mathcal{N}(\text{tr}_FP_1)$. Now, let $u_1$ be a vertex contained in $F$. By Lemma~\ref{l_zvezda}(iii), we have two possible cases for $N_{u_2}$. If all of the maximal normal cones that are contained in $N_{u_2}$ correspond to vertices of $\text{tr}_FP_1$ not contained in the truncation hyperplane, then all of them are maximal normal cones to $P_1$ at vertices that do not belong to $F$. Therefore, according to Remark~\ref{r_preskmaksimalnihnijemaksimalan}, the intersection of $N_{u_1}$ and the union of such cones is not an $n$-cone. Otherwise, there is exactly one vertex $v$ of tr$_FP_1$ contained in the truncation hyperplane, such that $N_v(\text{tr}_FP_1)\subseteq N_{u_2}$. According to Lemma~\ref{l_konusiparalelnetrunkacije} and Remark~\ref{r_preskmaksimalnihnijemaksimalan}, the intersection of $N_{u_1}$ and $N_{u_2}$ is an $n$-cone if and only if there is an edge of $P_1$ containing both $u_1$ and $v$. When this is the case, their intersection is exactly $N_v(\text{tr}_FP_1)$. We conclude that the second condition of Proposition~\ref{p_dovoljnomakskonuseposmatrati} is also satisfied. \end{proof} If $P_1$ is a simple polytope with a vertex $v$, then $conv(\mathcal{V}(P_1)-\{v\})$ is a $v$-de\-for\-mation of $P_1$. It means that Proposition~\ref{p_trunkacijatemenaprostog} is just a special case of the previous one. However, the methods used in their proofs are essentially different (note that Proposition~\ref{c_fanovi_sabiraka} is not even used in the proof of Proposition~\ref{p_trunkacijatemenaprostog}). Now, in order to answer Question~\ref{q_pitanje}, we present a polytope $PA_{n,1}$, and furthermore, a family of $n$-polytopes $PA_{n,c}$, where $c\in (0,1]$. Let $\{\mathcal{A}_1,\mathcal{A}_2\}$ be a partition of $\mathcal{B}_1$ such that the block $\mathcal{A}_1$ is the collection of all the singletons, i.e.
\[ \mathcal{A}_1=\bigl\{\{\{i_{1+l}, \ldots, i_1\}\} \mid 0\leqslant l \leqslant n-1\bigr\} \text{ and } \mathcal{A}_2=\mathcal{B}_1-\mathcal{A}_1, \] where $i_1,\ldots,i_n$ are mutually distinct elements of $[n+1]$. For the sequel, let \[ \beta=\bigl\{\{i_{k+l},\ldots,i_k,\ldots,i_1\},\ldots,\{i_{k+l},\ldots,i_k,i_{k-1}\},\{i_{k+l},\ldots,i_k\} \bigr\} \] be an element of $\mathcal{A}_2$, where $1 < k\leqslant k+l\leqslant n$. Let $$\beta_{min}=\{i_{k+l},\ldots,i_k\}, \quad \beta_{max}=\{i_{k+l},\ldots,i_k,\ldots,i_1\} \quad \text{and}$$ \[ \mathcal{B}_\beta= \bigl\{B \subseteq [n+1]\mid B \in \beta \text{ or } B\subset \beta_{min} \text{ or } \beta_{max}\subset B \bigr\} \cup \bigl\{\{v\} \mid v \in [n+1]\bigr\}. \] \begin{lem} \label{l_bildingzanest} The set $\mathcal{B}_\beta-\{[n+1]\} $ is a building set of $\mathcal{P}([n+1])$. \end{lem} \begin{proof} Let $B_1$ and $B_2$ be two distinct elements of $\mathcal{B}_\beta-\{[n+1]\}$ such that $B_1\cap B_2 \neq \emptyset$. If they are comparable, then their union belongs to $\mathcal{B}_\beta -\{[n+1]\}$. Otherwise, neither of them is a singleton (a singleton that intersects another set is contained in it), and since $\beta_{min}\subset \beta_{max} $, we have that $B_1,B_2\supset \beta_{max}$ or $B_1,B_2\subset \beta_{min}$. It follows that $\beta_{max} \subseteq B_1\cap B_2 \subset B_1\cup B_2$ or $\beta_{min} \supseteq B_1\cup B_2$. Hence, $\mathcal{B}_\beta-\{[n+1]\}$ is a building set of $\mathcal{P}([n+1])$ according to Definition~\ref{d_bilding_skup}. \end{proof} By the previous lemma and Definition~\ref{d_bilding_skup}, $\mathcal{B}_\beta-\{[n+1]\} $ is a building set of the simplicial complex $C_0$. The family of all nested sets with respect to this building set forms a simplicial complex, which we denote by $C_2$. \begin{prop} \label{p_nestoedarglavni} The nestohedron $P_{\mathcal{B}_\beta }$ is an $n$-dimensional Minkowski-realisation of $C_2$. \end{prop} \begin{proof} It follows directly from Lemma~\ref{l_bildingzanest}, Remark~\ref{r_postnikov_i_nas_nested}, Proposition~\ref{p_relaizacijaCo} and Postnikov's Minkowski-realisation of nestohedra given at the end of Section 3. \end{proof} Therefore, the facets of $P_{\mathcal{B}_\beta}$ can be properly labelled according to Definition~\ref{d_proplabel}. For an element $A \in \mathcal{B}_\beta-\{[n+1]\}$, let $f_A$ be the facet labelled by $A$. By the definition of $\mathcal{B}_\beta$, we have that $\beta \subseteq \mathcal{B}_\beta-\{[n+1]\} $, and hence, let \[ F_\beta= \bigcap\limits_{B \in \beta} f_B . \] Since the elements of $\beta$ are mutually comparable, $F_\beta$ is a proper face of $P_{\mathcal{B}_\beta}$ (cf.\ \cite[Theorem~1.5.14]{BP15}), and since $\beta$ is not a singleton, $F_\beta$ is not a facet. Let $ \mathcal{B}_{\beta \vert A} $ denote $\{ B \in \mathcal{B}_\beta \mid B\subseteq A\}$. \begin{prop} \label{p_burstabernejednakosti} We have \[ P_{\mathcal{B}_\beta}=\bigl \{ x \in \mathbf{R}^{n+1} \mid \sum\limits_{i=1}^{n+1}x_i=\lvert \mathcal{B}_\beta \rvert , \; \sum\limits_{i \in A}^{}x_i\geqslant \lvert \mathcal{B}_{\beta \vert A} \rvert \emph{\text{ for every }} A\in \mathcal{B}_\beta \bigr \}. \] Moreover, every hyperplane $H_A=\bigl \{ x \in \mathbf{R}^{n+1} \mid \sum\limits_{i \in A}^{}x_i=\lvert \mathcal{B}_{\beta \vert A} \rvert \; \bigr \}$ with $A \neq [n+1]$ defines the facet $f_A$ of $P_{\mathcal{B}_\beta}$. \end{prop} \begin{proof} It follows directly from Proposition~1.5.11 in \cite{BP15} and Proposition~\ref{p_nestoedarglavni}.
\end{proof} Now, let $N_\beta$ be the polytope obtained from $P_{\mathcal{B}_\beta}$ by removing the face $F_\beta$, i.e. \[ N_\beta=conv\bigl( \; \mathcal{V}(P_{\mathcal{B}_\beta})-\mathcal{V}(F_\beta) \; \bigr). \] Let $\kappa_\beta:\mathbf{R}^{n+1}\rightarrow \mathbf{R}$ be a function such that \[ \kappa_\beta(x)=\sum\limits_{B \in \beta}^{} \sum\limits_{i \in B} x_i=x_{i_1}+2x_{i_2}+\ldots+k(x_{i_k}+\ldots+x_{i_{k+l}}), \] where $x=(x_1,\ldots,x_{n+1})$, and let $ m_\beta= \min\limits_{v\in \mathcal{V}(P_{\mathcal{B}_\beta})}^{} \kappa_\beta(v)$. \begin{prop}\label{p_prekominimumaFbeta} The following holds: \[ F_\beta=conv\{v\in \mathcal{V}(P_{\mathcal{B}_\beta}) \mid \kappa_\beta(v)= m_\beta\}. \] \end{prop} \begin{proof} Let $v=(v_1,\ldots,v_{n+1})$ be a vertex of the nestohedron $P_{\mathcal{B}_\beta}$. Since $\beta\subseteq \mathcal{B}_\beta$, from Proposition~\ref{p_burstabernejednakosti}, we have that \[ \kappa_\beta(v)=\sum\limits_{B \in \beta}^{} \sum\limits_{i \in B} v_i \geqslant \sum\limits_{B \in \beta}^{} \lvert \mathcal{B}_{\beta \vert B} \rvert , \] which implies $m_\beta=\sum\limits_{B \in \beta}^{} \lvert \mathcal{B}_{\beta \vert B} \rvert $. Therefore, $\kappa_\beta(v)=m_\beta$ if and only if for every $B \in \beta$, the vertex $v$ lies in the hyperplane $H_B$. Since $H_B$ defines the facet $f_B$, $\kappa_\beta(v)=m_\beta$ if and only if $v \in \bigcap\limits_{B \in \beta} f_B$. \end{proof} \begin{cor}\label{c_prekominimumaNbeta} The following holds: \[ N_\beta=conv\{v\in \mathcal{V}(P_{\mathcal{B}_\beta}) \mid \kappa_\beta(v) > m_\beta\}. \] \end{cor} The previous claim offers a convenient way to obtain the polytope $N_\beta$ from the nestohedron $P_{\mathcal{B}_\beta}$: one can work only with vertices and their coordinates instead of facets and their labels, which is algorithmically closer to Minkowski sums and beneficial from the computational point of view. \begin{exm} If $n=2$ and $\beta=\bigl\{\{1,2\},\{1\}\bigr\}$, then $\mathcal{B}_\beta=\beta \cup \bigl\{\{2\},\{3\},[3]\bigr\}$ and $P_{\mathcal{B}_\beta}$ is the trapezoid $ABCD$ given in Figure~\ref{s:skica_12}. Since $m_\beta=4$, $F_\beta$ and $N_\beta$ are the vertex $D$ and the triangle $ABC$, respectively. \end{exm} \begin{exm} Let $n=3$. If $\beta=\bigl\{\{1,2,4\},\{1,2\},\{1\} \bigr\}$, then $\mathcal{B}_\beta=\beta \cup \bigl \{\{2\},\{3\},\{4\},[4] \bigr\}$ and \linebreak$P_{\mathcal{B}_\beta}=\Delta_{[4]}+\Delta_{\{1\}} +\Delta_{\{2\}}+\Delta_{\{3\}}+\Delta_{\{4\}}+\Delta_{\{1,2,4\}}+\Delta_{\{1,2\}}$, the nestohedron $ABCDEFGH$ illustrated in Figure~\ref{s:skica_a4} left. Here, $m_\beta=9$, and hence, $F_\beta$ is the vertex $D$, while $N_\beta$ is the convex hull of the remaining vertices (see Figure~\ref{s:skica_a4} right). Figure~\ref{s:skica_a4rez} depicts the sum $P_{\mathcal{B}_\beta}+N_\beta$, which is normally equivalent to the polytope obtained from $P_{\mathcal{B}_\beta}$ by truncation in the vertex $D$. \begin{figure}[h!h!]
\begin{center} \begin{tabular}{cc} \begin{tikzpicture}[scale=1.3] \draw (0,0) node[above, xshift=-0.2cm] {\scriptsize $H(1,2,1,3)$}-- (1.4,0) node[above, xshift=0.2cm] {\scriptsize $G(2,1,1,3)$}; \draw(1.4,0)-- (1.4,-1.4) node[below] {\scriptsize $C(2,1,2,2)$}; \draw (1.4,-1.4)-- (0,-1.4)node[below] {\scriptsize $D(1,2,2,2)$} -- (0,0); \filldraw[red] (0,-1.4) circle (1pt); \draw (1.4,0) -- (2.4,-2) node[below, xshift=0.3cm] {\scriptsize $F(4,1,1,1)$}; \draw (0,0) -- (-1,-2) node[below, xshift=-0.3cm] {\scriptsize $E(1,4,1,1)$}; \draw [dashed] (-1,-2) -- (2.4,-2); \draw (0,-1.4) -- (-0.7,-2.6)node[below] {\scriptsize $A(1,3,2,1)$} -- (2.1,-2.6) node[below] {\scriptsize $B(3,1,2,1)$}-- (1.4,-1.4); \draw (-0.7,-2.6) -- (-1,-2); \draw (2.1,-2.6) -- (2.4,-2); \draw (3.1,-1) node {$+$}; \end{tikzpicture} & \hspace{-0.7cm} \begin{tikzpicture}[scale=1.3] \draw (0,0) node[above, xshift=-0.2cm] {\scriptsize $T_2(1,2,1,3)$}-- (1.4,0) node[above, xshift=0.2cm] {\scriptsize $T_7(2,1,1,3)$}; \draw (1.4,0)-- (1.4,-1.4) node[above, xshift=-0.85cm, yshift=-0.25cm] {\scriptsize $T_3(2,1,2,2)$}; \draw (1.4,-1.4) -- (-0.7,-2.6)node[below] {\scriptsize $T_1(1,3,2,1)$}--(0,0); \draw (0,0) -- (1.4,-1.4); \draw (1.4,0) -- (2.4,-2) node[above,xshift=-0.1cm ] {\scriptsize $T_5(4,1,1,1)$}; \draw (0,0) -- (-1,-2) node[above, xshift=-0.2cm] {\scriptsize $T_6(1,4,1,1)$}; \draw [dashed] (-1,-2) -- (2.4,-2); \draw (-0.7,-2.6) -- (2.1,-2.6) node[below] {\scriptsize $T_4(3,1,2,1)$}-- (1.4,-1.4); \draw (-0.7,-2.6) -- (-1,-2); \draw (2.1,-2.6) -- (2.4,-2); \end{tikzpicture} \end{tabular} \end{center} \caption{The polytopes $P_{\mathcal{B}_\beta}$ and $N_\beta$ for $\beta=\bigl\{\{1,2,4\},\{1,2\},\{1\}\bigr\}$} \label{s:skica_a4} \end{figure} \vspace{-3mm} \begin{figure} [h!h!] \begin{center} \begin{tikzpicture}[scale=1.4] \draw(0,0) node[above, xshift=-0.2cm] {\scriptsize $H_2(2,4,2,6)$}-- (1.4,0) node[above, xshift=0.2cm] {\scriptsize $G_7(4,2,2,6)$}; \draw (1.4,0)-- (1.4,-1.4) node[above, xshift=0.85cm, yshift=-0.25cm] {\scriptsize $C_3(4,2,4,4)$}; \draw (1.4,-1.4)-- (0.7,-1.4) node[above, xshift=0.15cm, yshift=-0.1cm] {\scriptsize $D_3(3,3,4,4)$}; \draw (0.7,-1.4)-- (-0.35,-1.9) node[above, xshift=-0.85cm, yshift=-0.1cm] {\scriptsize $D_1(2,5,4,3)$}--(0,-0.7) node[above, yshift=-0.1cm, xshift=0.8cm] {\scriptsize $D_2(2,4,3,5)$}--(0,0); \draw (0,-0.7) -- (0.7,-1.4); \draw (1.4,0) -- (2.4,-2) node[below, xshift=0.7cm] {\scriptsize $F_5(8,2,2,2)$}; \draw (0,0) -- (-1,-2) node[below, xshift=-0.7cm] {\scriptsize $E_6(2,8,2,2)$}; \draw [dashed](-1,-2) -- (2.4,-2); \draw (-0.35,-1.9) -- (-0.7,-2.6)node[below] {\scriptsize $A_1(2,6,4,2)$} -- (2.1,-2.6) node[below] {\scriptsize $B_4(6,2,4,2)$}-- (1.4,-1.4); \draw (-0.7,-2.6) -- (-1,-2); \draw (2.1,-2.6) -- (2.4,-2); \end{tikzpicture} \end{center} \caption{The sum $P_{\mathcal{B}_\beta}+N_\beta$ for $\beta=\bigl\{\{1,2,4\},\{1,2\},\{1\}\bigr\}$} \label{s:skica_a4rez} \end{figure} If $\beta=\bigl\{\{1,2,4\},\{1,2\} \bigr\}$, then $\mathcal{B}_\beta$ and $P_{\mathcal{B}_\beta}$ are the same as in the previous case, while $m_\beta=8$. This minimum is achieved at the points $C$ and $D$, and therefore $F_\beta$ is the edge $CD$. Equivalently, $F_\beta$ is the intersection of the facets labelled by $\{1,2\}$ and $\{1,2,4\}$, i.e.\ the square $CDGH$ and the trapezoid $ABCD$. Meanwhile, $N_\beta$ is the convex hull of the remaining points (see Figure~\ref{s:skica_a2} right).
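As a quick check of these values: here $\kappa_\beta(x)=2(x_1+x_2)+x_4$, so, using the coordinates from Figure~\ref{s:skica_a4}, $\kappa_\beta(C)=\kappa_\beta(D)=8=m_\beta$, while $\kappa_\beta$ equals $9$ at $A$, $B$, $G$ and $H$, and $11$ at $E$ and $F$.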
Note that the partial sum $P_{\mathcal{B}_\beta}+N_\beta$, depicted in Figure~\ref{s:skica_rez} left, is normally equivalent to the polytope obtained from $P_{\mathcal{B}_\beta}$ by truncation in the edge $CD$. If $\beta=\bigl\{\{1,2\},\{1\} \bigr\}$, then $\mathcal{B}_\beta=\beta \cup \bigl\{\{2\},\{3\},\{4\},\{1,2,3\},\{1,2,4\},[4] \bigr\}$ and $P_{\mathcal{B}_\beta}=\Delta_{[4]}+\Delta_{\{1\}} +\Delta_{\{2\}}+\Delta_{\{3\}}+\Delta_{\{4\}}+\Delta_{\{1,2,3\}}+\Delta_{\{1,2,4\}}+\Delta_{\{1,2\}}$, the nestohedron $ABCDEFGHIJ$ illustrated in Figure~\ref{s:skica_a3} left. This implies that $m_\beta=4$ and $F_\beta$ is the edge $DJ$. Equivalently, $F_\beta$ is the intersection of the facets labelled by $\{1,2\}$ and $\{1\}$, i.e.\ the square $CDIJ$ and the pentagon $AEDJH$. Meanwhile, $N_\beta$ is the convex hull of the remaining points depicted in Figure~\ref{s:skica_a3} right. Notice that, in this case, $N_\beta$ is not simple. Figure~\ref{s:skica_rez} right illustrates the sum $P_{\mathcal{B}_\beta}+N_\beta$, which is normally equivalent to the polytope obtained from $P_{\mathcal{B}_\beta}$ by truncation in the edge~$DJ$. \begin{figure}[h!h!] \begin{center} \begin{tabular}{cc} \begin{tikzpicture}[scale=1.3] \draw (0,0) node[above, xshift=-0.2cm] {\scriptsize $H(1,2,1,3)$}-- (1.4,0) node[above, xshift=0.2cm] {\scriptsize $G(2,1,1,3)$}; \draw(1.4,0)-- (1.4,-1.4) node[below] {\scriptsize $C(2,1,2,2)$}; \draw[red] (1.4,-1.4)-- (0,-1.4); \draw (0,-1.4)node[below] {\scriptsize $D(1,2,2,2)$} -- (0,0); \filldraw[red] (0,-1.4) circle (1pt); \filldraw[red] (1.4,-1.4) circle (1pt); \draw (1.4,0) -- (2.4,-2) node[below, xshift=0.3cm] {\scriptsize $F(4,1,1,1)$}; \draw (0,0) -- (-1,-2) node[below, xshift=-0.3cm] {\scriptsize $E(1,4,1,1)$}; \draw [dashed] (-1,-2) -- (2.4,-2); \draw (0,-1.4) -- (-0.7,-2.6)node[below] {\scriptsize $A(1,3,2,1)$} -- (2.1,-2.6) node[below] {\scriptsize $B(3,1,2,1)$}-- (1.4,-1.4); \draw (-0.7,-2.6) -- (-1,-2); \draw (2.1,-2.6) -- (2.4,-2); \draw (3.1,-1) node {$+$}; \end{tikzpicture}& \hspace{-0.7cm} \begin{tikzpicture}[scale=1.3] \draw(0,0) node[above, xshift=-0.2cm] {\scriptsize $T_1(1,2,1,3)$}-- (1.4,0) node[above, xshift=0.2cm] {\scriptsize $T_3(2,1,1,3)$}; \draw (-0.7,-2.6)node[below] {\scriptsize $T_2(1,3,2,1)$}--(0,0); \draw (1.4,0) -- (2.4,-2) node[above,xshift=-0.2cm ] {\scriptsize $T_5(4,1,1,1)$}; \draw (0,0) -- (-1,-2) node[above, xshift=-0.2cm] {\scriptsize $T_6(1,4,1,1)$}; \draw [dashed](-1,-2) -- (2.4,-2); \draw (-0.7,-2.6) -- (2.1,-2.6) node[below] {\scriptsize $T_4(3,1,2,1)$}-- (1.4,0); \draw (-0.7,-2.6) -- (-1,-2); \draw (2.1,-2.6) -- (2.4,-2); \end{tikzpicture} \end{tabular} \end{center} \caption{The polytopes $P_{\mathcal{B}_\beta}$ and $N_\beta$ for $\beta=\bigl\{\{1,2,4\},\{1,2\}\bigr\}$} \label{s:skica_a2} \end{figure} \begin{figure}[h!h!]
\begin{center} \begin{tabular}{cc} \begin{tikzpicture}[scale=1.4] \hspace{-0.15cm} \draw (-0.2,-0.5) node[above, xshift=-0.2cm] {\scriptsize $H(1,3,1,3)$}-- (1.6,-0.5) node[above, xshift=0.2cm] {\scriptsize $G(3,1,1,3)$}--(1.1,-1.2) node[above, xshift=0.8cm, yshift=-0.3cm] {\scriptsize $I(2,1,2,3)$} -- (1.1,-2.1) node[below, xshift=0.3cm] {\scriptsize $C(2,1,3,2)$}-- (0.3,-2.1)node[below, xshift=-0.3cm] {\scriptsize $D(1,2,3,2)$}-- (0.3,-1.2) node[above, xshift=-0.8cm, yshift=-0.3cm] {\scriptsize $J(1,2,2,3)$}; \draw[red] (0.3,-2.1) -- (0.3,-1.2); \filldraw [red] (0.3,-2.1) circle (1pt) (0.3,-1.2) circle (1pt); \draw (-0.2,-0.5) -- (0.3,-1.2); \draw (1.1,-1.2) -- (0.3,-1.2); \draw (1.6,-0.5) -- (2.4,-2) node[above, xshift=0.3cm] {\scriptsize $F(5,1,1,1)$}; \draw (-0.2,-0.5) -- (-1,-2) node[above, xshift=-0.2cm] {\scriptsize $E(1,5,1,1)$}; \draw [dashed] (-1,-2) -- (2.4,-2); \draw (0.3,-2.1) -- (-0.2,-3)node[below] {\scriptsize $A(1,3,3,1)$} -- (1.6,-3) node[below] {\scriptsize $B(3,1,3,1)$}--(1.1,-2.1) ; \draw (-0.2,-3) -- (-1,-2); \draw (1.6,-3) -- (2.4,-2); \draw (2.8,-1.2) node {$+$}; \end{tikzpicture} & \hspace{-1.5cm} \begin{tikzpicture}[scale=1.4] \draw (1.1,-1.2)--(-0.2,-0.5) node[above, xshift=-0.2cm] {\scriptsize $T_1(1,3,1,3)$}-- (1.6,-0.5) node[above, xshift=0.2cm] {\scriptsize $T_8(3,1,1,3)$}--(1.1,-1.2) node[above, xshift=0.82cm, yshift=-0.3cm] {\scriptsize $T_2(2,1,2,3)$} -- (1.1,-2.1) node[above, xshift=0.4cm] {\scriptsize $T_4(2,1,3,2)$}-- (1.6,-3) node[below] {\scriptsize $T_5(3,1,3,1)$}-- (-0.2,-3)node[below] {\scriptsize $T_3(1,3,3,1)$}--(1.1,-2.1); \draw (1.6,-0.5) -- (2.4,-2) node[below, xshift=0.3cm] {\scriptsize $T_7(5,1,1,1)$}--(1.6,-3); \draw (-0.2,-0.5) -- (-1,-2) node[below, xshift=-0.3cm] {\scriptsize $T_6(1,5,1,1)$}; \draw [dashed](-1,-2) -- (2.4,-2); \draw (-0.2,-3) -- (-1,-2); \draw (-0.2,-0.5) -- (-0.2,-3); \end{tikzpicture} \end{tabular} \end{center} \caption{The polytopes $P_{\mathcal{B}_\beta}$ and $N_\beta$ for $\beta=\bigl\{\{1,2\},\{1\}\bigr\}$} \label{s:skica_a3} \end{figure} \begin{figure}[h!h!] 
\begin{center} \begin{tabular}{cc} \begin{tikzpicture}[scale=1.35] \hspace{-0.15cm} \draw (0,0) node[above, xshift=-0.2cm] {\scriptsize $H_1(2,4,2,6)$} -- (1.4,0) node[above, xshift=0.2cm] {\scriptsize $G_3(4,2,2,6)$}; \draw (1.4,0) -- (1.4,-0.8) node[above, xshift=0.85cm, yshift=-0.2 cm] {\scriptsize $C_3(4,2,3,5)$} -- (1.7,-1.8) node[above, xshift=0.85cm, yshift=-0.2 cm] {\scriptsize $C_4(5,2,4,3)$}; \draw (1.7,-1.8)--(-0.3,-1.8)node[above, xshift=-0.85cm, yshift=-0.2 cm] {\scriptsize $D_2(2,5,4,3)$} -- (-0.7,-2.6)node[below] {\scriptsize $A_2(2,6,4,2)$} -- (2.1,-2.6) node[below] {\scriptsize $B_4(6,2,4,2)$}; \draw (0,0) -- (0,-0.8) node[above, xshift=-0.85cm, yshift=-0.1 cm] {\scriptsize $D_1(2,4,3,5)$} -- (-0.3,-1.8); \draw (0,-0.8) -- (1.4,-0.8) ; \draw (1.7,-1.8) -- (2.1,-2.6); \draw (1.4,0) -- (2.4,-2) node[below, xshift=0.4cm] {\scriptsize $F_5(8,2,2,2)$}; \draw (0,0) -- (-1,-2) node[below, xshift=-0.4cm] {\scriptsize $E_6(2,8,2,2)$}; \draw [dashed] (-1,-2) -- (2.4,-2); \draw (-0.7,-2.6) -- (-1,-2); \draw (2.1,-2.6) -- (2.4,-2); \end{tikzpicture} & \hspace{-1.5cm} \begin{tikzpicture}[scale=1.45] \hspace{-0.15cm} \draw (-0.2,-0.5) node[above, xshift=-0.2cm] {\scriptsize $H_1(2,6,2,6)$}-- (1.6,-0.5) node[above, xshift=0.2cm] {\scriptsize $G_8(6,2,2,6)$}--(1.1,-1.2) node[above, xshift=0.8cm, yshift=-0.3cm] {\scriptsize $I_2(4,2,4,6)$} -- (1.1,-2.1) node[above, xshift=0.8cm, yshift=-0.35cm] {\scriptsize $C_4(4,2,6,4)$}-- (0.6,-2.1)node[below, xshift=-0.3cm] {\scriptsize $D_4(3,3,6,4)$}-- (0.6,-1.2) node[above, xshift=-0.8cm, yshift=-0.3cm] {\scriptsize $J_2(3,3,4,6)$}--(0.05,-0.85) node[above, xshift=-0.8cm, yshift=-0.3cm] {\scriptsize $J_1(2,5,3,6)$}--(-0.2,-0.5)-- (-1,-2) node[above] {\scriptsize $E_6(2,10,2,2)$}; \draw (1.1,-1.2) -- (0.6,-1.2); \draw (1.6,-0.5) -- (2.4,-2) node[above, xshift=-0.2cm] {\scriptsize $F_7(10,2,2,2)$}; \draw [dashed] (-1,-2) -- (2.4,-2); \draw (-1,-2)-- (-0.2,-3)node[below] {\scriptsize $A_3(2,6,6,2)$} -- (1.6,-3) node[below] {\scriptsize $B_5(6,2,6,2)$}--(1.1,-2.1) ; \draw (1.6,-3) -- (2.4,-2); \draw (0.05,-0.85) -- (0.05,-2.55) node[above, xshift=-0.8cm, yshift=-0.3cm] {\scriptsize $D_3(2,5,6,3)$}-- (-0.2,-3); \draw (0.05,-2.55) -- (0.6,-2.1); \end{tikzpicture} \end{tabular} \end{center} \caption{The sums $P_{\mathcal{B}_\beta}+N_\beta$ for $\beta=\bigl\{\{1,2,4\},\{1,2\}\bigr\}$ and $\beta=\bigl\{\{1,2\},\{1\}\bigr\}$} \label{s:skica_rez} \end{figure} \end{exm} \begin{exm} Let $n=4$. If $\beta=\bigl\{\{1,2,3\},\{1,2\} \bigr\}$, then \\$\mathcal{B}_\beta=\beta \cup \bigl \{\{1\},\{2\},\{3\},\{4\},\{5\},\{1,2,3,4\},\{1,2,3,5\},[5] \bigr\}$ and $\mathcal{V}(P_{\mathcal{B}_\beta})$ is the set \[\begin{array}{rlllll} \bigl\{ (6,1,1,1,1), & (1,6,1,1,1), & (2,1,5,1,1), & (1,2,5,1,1), & (1,2,3,3,1), \\[0.7ex] (4,1,1,3,1), & (3,1,1,3,2), & (1,4,1,3,1), & (2,1,3,3,1), & (1,2,3,1,3), \\[0.7ex] (2,1,2,3,2), & (1,2,2,3,2), & (4,1,1,1,3), & (1,4,1,1,3), & (1,3,1,2,3), \\[0.7ex] (2,1,3,1,3), & (3,1,1,2,3), & (2,1,2,2,3), & (1,2,2,2,3), & (1,3,1,3,2) \bigr\}. \end{array}\] This implies that $m_\beta=8$ (here $\kappa_\beta(x)=2x_1+2x_2+x_3$), and $F_\beta$ is the quadrilateral with vertices $(2,1,2,3,2)$, $(1,2,2,3,2)$, $(2,1,2,2,3)$ and $(1,2,2,2,3)$. Hence, $N_\beta$ is the convex hull of the remaining points. \end{exm} Finally, for $n\geqslant 2$ let \[ PA_{n,1}= \Delta_{[n+1]}+\sum_{\beta\in \mathcal{A}_1} \Delta_{\bigcup\beta}+ \sum_{\beta\in \mathcal{A}_2} N_\beta. \] After a brief computational aside, the rest of this section is devoted to a proof of the following result.
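Corollary~\ref{c_prekominimumaNbeta} translates directly into a short computation. The following Python sketch is an informal illustration (not part of the formal development): it recovers $\mathcal{V}(N_\beta)$ from $\mathcal{V}(P_{\mathcal{B}_\beta})$ for the three-dimensional example with $\beta=\bigl\{\{1,2,4\},\{1,2\},\{1\}\bigr\}$, where $\kappa_\beta(x)=3x_1+2x_2+x_4$ and the vertex coordinates are those of Figure~\ref{s:skica_a4}.
\begin{verbatim}
# Compute V(N_beta) from V(P_{B_beta}) via the corollary on kappa_beta,
# for n = 3 and beta = {{1,2,4},{1,2},{1}}: kappa_beta(x) = 3x1 + 2x2 + x4.
def kappa(v):
    return 3 * v[0] + 2 * v[1] + v[3]

vertices = [(1, 3, 2, 1), (3, 1, 2, 1), (2, 1, 2, 2), (1, 2, 2, 2),  # A-D
            (1, 4, 1, 1), (4, 1, 1, 1), (2, 1, 1, 3), (1, 2, 1, 3)]  # E-H

m_beta = min(kappa(v) for v in vertices)             # m_beta = 9, attained at D
N_beta = [v for v in vertices if kappa(v) > m_beta]  # the vertices of N_beta
\end{verbatim}
The minimum $m_\beta=9$ is attained only at $D=(1,2,2,2)$, so the remaining seven vertices span $N_\beta$, in accordance with Figure~\ref{s:skica_a4} right.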
\begin{thm}\label{t_glavnat} $PA_{n,1}$ is an $n$-dimensional Minkowski-realisation of $C$. Moreover, \[ PA_{n,1} \simeq \mathbf{PA}_n. \] \end{thm} \begin{lem} \label{l_konstrukcije} If $N\in C_2$ is a maximal nested set which corresponds to a vertex of $F_\beta$, then $N$ is a maximal 0-nested set and $\beta\subseteq N$. \end{lem} \begin{proof} Let $v$ be a vertex of $F_\beta$ that corresponds to $N$. By Proposition~7.5 in \cite{P09}, $\lvert N \rvert =n$. Since $v\in f_B$ for every $B \in \beta$, we have that $\beta \subseteq N$. In particular, if $k=n$, then $N=\beta$. Otherwise, by Definition~\ref{d_nested_skup_u_odnosu_na_bilding_skup}, $N$ is obtained by enlarging $\beta$ with $n-k$ elements of $\mathcal{B}_\beta-\{[n+1]\}$ such that the union of every $N$-antichain belongs to $C_2-\mathcal{B}_\beta$. Let $B_1$ and $B_2$ be two non-singleton elements of $\mathcal{B}_\beta-\{[n+1]\}$, which are contained in $N$ and incomparable. Then $\{\{B_1\},\{B_2\}\}$ is an $N$-antichain such that $B_1\cup B_2\supseteq \beta_{max}$ or $\beta_{min} \supseteq B_1\cup B_2$. Therefore, $N$ does not contain such a pair. Next, suppose that $N$ contains two or more singletons. For every singleton $B=\{i_{j}\}$, where $i_{j}\in \beta_{max}-\beta_{min}$, there is $ A \in \beta$ such that $\{\{A\},\{B\}\}$ is an $N$-antichain whose union belongs to $\beta$. Similarly, for every singleton $B=\{i_{j}\}$, where $i_{j}\in [n+1]-\beta_{max}$, we have that $\{\{\beta_{max}\},\{B\}\}$ is an $N$-antichain whose union has $\beta_{max}$ as a subset, and therefore, belongs to the building set. It remains to check singleton subsets of $\beta_{min}$. The union of every pair of singleton subsets of $\beta_{min}$ is also a subset of $\beta_{min}$, i.e.\ belongs to the building set. Hence, at most one singleton subset of $\beta_{min}$ can belong to $N$, and all the other elements of $N$ are mutually comparable. \end{proof} Finally, let $\pi_{\beta,c}$ be the hyperplane $$x_{i_1}+2x_{i_2}+\ldots+k(x_{i_k}+\ldots+x_{i_{k+l}})=m_\beta+c,$$ where $c\in (0,1]$, and let $a_{\beta}$ be an outward normal to the halfspaces $\pi_{\beta,c}^{\geqslant}$. \begin{rem}\label{r_haovi} For every $c\in (0,1]$, the sum of outward normal vectors to the facets that contain $F_\beta$ is an outward normal vector to the halfspace $\pi_{\beta,c}^{\geqslant}$. \end{rem} \begin{lem}\label{l_glavnal} For an element $\beta$ of $\mathcal{A}_2$, the polytope $$P_{\mathcal{B}_\beta}\bigcap {\pi_{\beta,c}}^\geqslant$$ is an $F_\beta$-deformation of $P_{\mathcal{B}_\beta}$. Moreover, the following holds: \[ P_{\mathcal{B}_\beta}\bigcap {\pi_{\beta,c}}^\geqslant = \begin{cases} \emph{a parallel truncation }\emph{tr}_{F_\beta}P_{\mathcal{B}_\beta},& c\in (0,1)\\ N_\beta, & c=1. \end{cases} \] \end{lem} \begin{proof} Firstly, recall that each coordinate of every vertex of $P_{\mathcal{B}_\beta}$ is a natural number. This, together with Proposition~\ref{p_prekominimumaFbeta} and Corollary~\ref{c_prekominimumaNbeta}, implies that ${\pi_{\beta,c}}^\geqslant$ is beyond every vertex of $F_\beta$ and beneath every vertex of $N_\beta$, for every $c \in (0,1)$. Hence, $P_{\mathcal{B}_\beta}\bigcap {\pi_{\beta,c}}^\geqslant$ is a truncation of $P_{\mathcal{B}_\beta}$ in its face $F_\beta$. Since $\pi_{\beta,0}$ defines $F_\beta$, all these truncations are parallel, and hence, all of them are $F_\beta$-deformations of $P_{\mathcal{B}_\beta}$ (see Example~\ref{e_trunkacijajedeformacija}). Also, \[ P_{\mathcal{B}_\beta}\bigcap {\pi_{\beta,1}}^\geqslant =N_\beta.
\] Let $\{U,W\}$ be a partition of the set $\mathcal{V}(N_\beta)$ such that all the elements of $U$ are adjacent to $F_\beta$ in $P_{\mathcal{B}_\beta}$. To be precise, $u\in U$ if and only if there exists $v\in \mathcal{V} (F_\beta)$ such that $u$ and $v$ are adjacent in $P_{\mathcal{B}_\beta}$. Let $u=(u_1,\dots,u_{n+1})$ be an element of $U$ and $v=(v_1,\dots,v_{n+1})$ be a vertex of $F_\beta$ adjacent to $u$ in $P_{\mathcal{B}_\beta}$. There are two maximal nested sets $N_v,N_u\in C_2$ corresponding to $v$ and $u$, respectively. By Lemma~\ref{l_konstrukcije}, $N_v$ is a maximal 0-nested set containing $\beta$ as a subset. Since $\lvert N_v \rvert =\lvert N_u \rvert =n$ (see \cite[Proposition~7.5]{P09}) and $u$, $v$ are adjacent, we have that $\lvert N_v \cap N_u \rvert =n-1$, which entails that $N_u$ can be obtained from $N_v$ by replacing an element $S_v$ with another element $S_u$ of $(\mathcal{B}_\beta-\{[n+1]\})-\beta$. Since $\beta \nsubseteq N_u$, $S_v \in \beta$. Moreover, following the proof of Lemma~\ref{l_konstrukcije}, we can verify that $S_u$ is a singleton. We conclude that for two distinct vertices of $F_\beta$, there is no element of $U$ adjacent to both of them. Now, we show that $u\in \pi_{\beta,1}$, for every $u \in U$. Let $N_v$ be \[ \bigl\{\{i_{n},\ldots,i_1\},\dots,\{i_{n},i_{n-1}\},\{i_n\} \bigr\} \] such that $\beta\subseteq N_v$. From $v\in F_\beta$ and $u\notin F_\beta$, we have that $$ \kappa_\beta(v)=v_{i_1}+2v_{i_2}+\ldots+k(v_{i_k}+\ldots+v_{i_{k+l}})=m_\beta$$ and $$\kappa_\beta(u)=u_{i_1}+2u_{i_2}+\ldots+k(u_{i_k}+\ldots+u_{i_{k+l}})>m_\beta .$$ For every element $A$ of the set $N_v \cap N_u=N_v-\{S_v\}=N_u-\{S_u\}$, we have that $u,v \in f_A$. This, together with Proposition~\ref{p_burstabernejednakosti}, entails the following: \[\begin{array}{lr} (\star) & \sum\limits_{i\in A}u_i=\sum\limits_{i\in A} v_i=\lvert \mathcal{B}_{\beta \vert A} \rvert. \end{array}\] From Proposition~\ref{p_burstabernejednakosti}, we also have \[\begin{array}{lr} (\star\star) & u_1+\ldots+u_{n+1}=v_1+\ldots+v_{n+1}=\lvert \mathcal{B}_{\beta \vert [n+1]} \rvert . \end{array}\] Let $S_v=\{i_{k+l},\ldots,i_{p}\}$, where $1\leqslant p \leqslant k$. Then, by $(\star)$, \[ \kappa_\beta(u)=m_\beta-(v_{i_p}+v_{i_{p+1}}+\ldots+v_{i_{k+l}})+(u_{i_p}+u_{i_{p+1}}+\ldots+u_{i_{k+l}}). \] Let us analyse all possible cases. \vspace{2ex} (1) If $S_v$ is a singleton, i.e.\ $p=k=n$ and $l=0$, then \[ \kappa_\beta(u)= m_\beta-v_{i_k}+u_{i_k}. \] Since $\{i_k\}\in N_v$, by Proposition~\ref{p_burstabernejednakosti}, $v_{i_k}=\lvert \mathcal{B}_{\beta \vert \{i_k\}} \rvert =1$. Following the proof of Lemma~\ref{l_konstrukcije}, one may verify that $N_u$ is a nested set if and only if $S_u=\{i_{k-1}\}$. This entails that $u \in f_{\{i_{k-1}\}}$, i.e.\ applying Proposition~\ref{p_burstabernejednakosti}, $$ u_{i_{k-1}}=\lvert \mathcal{B}_{\beta \vert \{i_{k-1}\}}\rvert =1. $$ Since $\{i_k,i_{k-1}\}\in N_u \cap N_v$, applying Proposition~\ref{p_burstabernejednakosti} we obtain \[ u_{i_k}=\lvert \mathcal{B}_{\beta \vert \{i_k,i_{k-1}\}} \rvert -u_{i_{k-1}}=3-1=2. \] Therefore, $\kappa_\beta(u)= m_\beta-1+2=m_\beta+1$. \vspace{2ex} (2) If $S_v$ is not a singleton, then $\{i_{k+l},\ldots,i_{p+1}\} \in N_v\cap N_u$. Applying $(\star)$, we obtain \[ u_{i_{p+1}}+\ldots+u_{i_{k+l}}=v_{i_{p+1}}+\ldots+v_{i_{k+l}}, \] which implies \[ \kappa_\beta(u)=m_\beta-v_{i_p}+u_{i_p}.
\] Since $S_v,\{i_{k+l},\ldots,i_{p+1}\}\in N_v$, applying Proposition~\ref{p_burstabernejednakosti} we get the following equation: \[ v_{i_p}=\lvert \mathcal{B}_{\beta \vert S_v} \rvert -\lvert \mathcal{B}_{\beta \vert S_v-\{i_p\}}\rvert =\lvert \mathcal{B}_{\beta \vert S_v-\{i_p\}}\rvert +2-\lvert \mathcal{B}_{\beta \vert S_v-\{i_p\}}\rvert =2. \] \vspace{1ex} (2.1) If $\lvert S_v \rvert \neq n$, then we follow the proof of Lemma~\ref{l_konstrukcije} in order to analyse the form of $N_u$. In that manner, we conclude that $N_u$ is a nested set if and only if $S_u=\{i_{p-1}\}$. This entails that $u \in f_{\{i_{p-1}\}}$, i.e.\ applying Proposition~\ref{p_burstabernejednakosti}, $$ u_{i_{p-1}}=\lvert \mathcal{B}_{\beta \vert \{i_{p-1}\}} \rvert =1. $$ Since $\{i_{k+l},\ldots,i_{p-1}\},\{i_{k+l},\ldots,i_{p+1}\} \in N_v \cap N_u$, applying $(\star)$ we obtain the following equations: \[ u_{i_{k+l}}+\ldots+u_{i_{p-1}}=\lvert \mathcal{B}_{ \beta \vert S_v\cup \{i_{p-1}\}} \rvert =\lvert \mathcal{B}_{\beta \vert S_v} \rvert +2, \] \[ u_{i_{k+l}}+\ldots+u_{i_{p+1}}=\lvert \mathcal{B}_{\beta \vert S_v - \{i_p\}} \rvert =\lvert \mathcal{B}_{\beta \vert S_v} \rvert -2. \] Hence, $ u_{i_{p}}+u_{i_{p-1}}=4$, and therefore, $$\kappa_\beta(u)=m_\beta-2+(4-1)=m_\beta+1.$$ \vspace{1ex} (2.2) If $\lvert S_v \rvert = n$, i.e.\ $p=1, k+l=n$, then we again follow the proof of Lemma~\ref{l_konstrukcije} and conclude that $N_u$ is a nested set if and only if $S_u=\{i_{n+1}\}$. Therefore, $u \in f_{\{i_{n+1}\}}$, i.e.\ applying Proposition~\ref{p_burstabernejednakosti}, $$ u_{i_{n+1}}=\lvert \mathcal{B}_{\beta \vert \{i_{n+1}\}} \rvert =1. $$ Since $\{i_{n},\ldots,i_{2}\} \in N_v \cap N_u$, applying $(\star)$ we obtain that \[ u_{i_{n}}+\ldots+u_{i_{2}}=\lvert \mathcal{B}_{\beta \vert S_v - \{i_{1}\}} \rvert =\lvert \mathcal{B}_{\beta \vert S_v} \rvert -2=\lvert \mathcal{B}_{\beta \vert [n+1]} \rvert -4. \] This, together with $(\star\star)$, entails that \[ u_{i_{n+1}}+u_{i_{1}}=\lvert \mathcal{B}_{\beta \vert [n+1]} \rvert -\bigl(\lvert \mathcal{B}_{\beta \vert [n+1]} \rvert -4 \bigr)=4. \] Hence, $\kappa_\beta(u)=m_\beta-v_{i_1}+u_{i_1}= m_\beta-2+(4-1)=m_\beta+1$. \vspace{2ex} All this entails that for every $u \in U$, $\kappa_\beta(u)=m_\beta+1$, i.e.\ $conv U \subset \pi_{\beta,1}$. We can conclude that $N_\beta \cap \pi_{\beta,1}$ is exactly $conv U$. Indeed, if there existed an element $w\in W$ contained in $\pi_{\beta,1}$, then, since $w$ is not adjacent to $F_{\beta}$, this vertex of $P_{\mathcal{B}_\beta}$ would be contained in the convex hull of $U$, which would be a contradiction. Also, note that $\mathcal{V}(conv U)=U$, because each element of $U$ is a vertex of $P_{\mathcal{B}_\beta}$. Therefore, $conv U$ is an $(n-1)$-face of $N_\beta$. Let $c$ be an arbitrary element of the interval $(0,1)$ and let us denote by $f$ the facet of the truncation $P_{\mathcal{B}_\beta} \cap \pi_{\beta,c}^{\geqslant}$ contained in the truncation hyperplane. Since we have already concluded that for two distinct vertices of $F_\beta$ there is no element of $U$ adjacent to both of them, we can now conclude that for two distinct vertices of $f$ there is no element of $U$ adjacent to both of them in the truncation. Hence, since $\pi_{\beta, c}$ and $\pi_{\beta,0}$ are parallel, $conv U$ is a translate of $f$. In other words, $N_\beta$ can be obtained from the truncation by parallel translation of the facet $f$ without crossing over the vertices.
According to Remark~\ref{r_deformacijagruboreceno}, $N_\beta$ is an $F_\beta$-deformation of $P_{\mathcal{B}_\beta}$. \end{proof} \begin{rem}\label{r_dimenzijaNbeta} According to Definition~\ref{d_nested_skup_u_odnosu_na_bilding_skup}, the set \[ \bigl\{\{i_1\},\ldots,\{i_{n+1}\} \bigr\}-\bigl\{\{i_n\} \bigr\} \] is a maximal nested set, which corresponds to some element of the set $W$ (defined in the previous proof). Hence, $W\neq \emptyset$, i.e.\ $N_\beta$ is an $n$-polytope with the facet $conv U\subset \pi_{\beta,1}$. \end{rem} \begin{lem}\label{l_normalezaglavnut} Let $A=\{a_1,\ldots,a_n\}$ be the spanning set of vectors for an $n$-cone in $\mathbf{R}^n$, and let $h_I$ be the vector defined as $$h_I=\sum\limits_{i\in I} a_i,$$ where $I\subseteq [n]$ and $\lvert I \rvert \geqslant2$. The following claims hold. \begin{enumerate} \item[\emph{(i)}] For every two subsets $I,J \subseteq [n]$ such that $I\subset J$, the vector $h_J$ is contained in the cone spanned by the set $\{h_I\}\cup\{a_i \mid i\in J-I\}$. \item[\emph{(ii)}] For $2 \leqslant m\leqslant n-1$, let $I_1,\ldots,I_{m} \subseteq [n]$ such that $I_1 \supset I_2 \supset\ldots \supset I_{m}$, and let $A_1$ be the set obtained from $A$ by replacing $m$ elements with the vectors $h_{I_1},\ldots,h_{I_{m}}$. If $A_1$ spans an $n$-cone $N_1$, then for every $1\leqslant k<m$ there is exactly one element $i\in I_k-I_{k+1}$ such that the set obtained from $A_1$ by replacing $h_{I_k}$ with $a_{i}$ spans an $n$-cone $N_2$ which contains $N_1$. \end{enumerate} \end{lem} \begin{proof} The first claim follows directly from the fact that $h_J=h_I+\sum\limits_{i \in J-I}a_i$. For $1\leqslant k<m$, let $\Delta_k=I_k-I_{k+1}$. Since $I_1 \supset I_2 \supset\ldots \supset I_{m}$, the sets $\Delta_k$ are mutually disjoint. Since $h_{I_m}$ is a spanning ray of $N_1$, there is at least one element of the set $\{a_i\mid i \in I_m\}$ which is not contained in $A_1$. Also, at least one element of the set $\{a_i\mid i \in \Delta_k\}$ does not belong to $A_1$; otherwise, by the claim (i), $h_{I_k}$ would not be a spanning ray of $N_1$. If we suppose that for some $k$ there are two or more such elements, then there are more than $m$ elements of $A$ that are not contained in $A_1$. This contradicts the assumption $\lvert A-A_1\rvert=m$. Using the claim (i), it remains to conclude that $N_2$ is an $n$-cone which contains each of the spanning vectors of $N_1$, i.e.\ which contains $N_1$. \end{proof} \begin{proof} [Proof of Theorem~\ref{t_glavnat}] It is obvious that $PA_{n,1}$ is well formed according to Definition~\ref{d_Mink_realizacija}(ii). Since each summand is either a $d$-simplex $\Delta_{\bigcup\beta}$, $d<n$, or an $n$-polytope $N_{\beta}$ (see Remark~\ref{r_dimenzijaNbeta}), applying Corollary~\ref{prop_minkowski_svojstva2}(ii) we conclude that the sum $PA_{n,1}$ is at least $n$-dimensional. Let us consider the partial sum \[ S_0=\Delta_{[n+1]}+\sum_{\beta\in \mathcal{A}_1} \Delta_{\bigcup\beta}. \] It is an $n$-permutohedron (see the end of Section~3). More precisely, by Proposition~\ref{p_burstabernejednakosti} (or \cite[Theorem~1.5.4]{BP15}), $S_0$ is the intersection of the following $l=2^{n+1}-2$ halfspaces \[ \alpha_j^{\geqslant}: \langle a_j,x\rangle\geqslant b_j, \; \; 1\leqslant j\leqslant l, \] where for every $j \in[l]$ there is $B\subset[n+1]$ such that $\alpha_j^{\geqslant}$ is the halfspace \[ H_{B}^{\geqslant}=\bigl\{x \in \mathbf{R}^{n+1} \mid \sum_{i\in B} x_i\geqslant 2^{\lvert B \rvert }- 1\bigr\}.
\] Each of these halfspaces is facet-defining, i.e.\ determines the facet $f_B$ of $S_0$. According to Definition~\ref{d_Mink_realizacija}, $S_0$ is an $n$-dimensional Minkowski-realisation of the simplicial complex $C_1$. Let us relabel its facets by the corresponding elements of $\mathcal{A}_1$: the facet $f_B$ is labelled by $\{B\}$. By Corollary~\ref{c_labele_odPA_n}, each of them is parallel to the equilabelled facet of $\mathbf{ PA}_n$. Before we show that the indexed set of the remaining summands is a truncator set of summands for this permutohedron, we first show that, for an arbitrary $\beta \in \mathcal{A}_2$, $N_{\beta}$ is a truncator summand for $S_0$. Recall that we can obtain $S_0$ by a sequence of parallel truncations of the nestohedron $P_{\mathcal{B}_{\beta}}$, up to normal equivalence. In other words, $P_{\mathcal{B}_{\beta}}$ can be obtained from $S_0$ by a sequence of parallel translations of the facets without crossing over the vertices. Formally, $P_{\mathcal{B}_{\beta}}$ can be defined as the intersection of the halfspaces \[ \gamma_j^{\geqslant}:\langle a_j,x\rangle\geqslant c_j, \; 1 \leqslant j \leqslant l, \] such that for every vertex of $S_0$ which is the intersection of the hyperplanes $\alpha_j$, $j \in J\subset [l]$, the intersection of the hyperplanes $\gamma_j$, $j \in J$, is a vertex of $P_{\mathcal{B}_{\beta}}$. Since $F_{\beta}$ is the intersection of the facets of $P_{\mathcal{B}_{\beta}}$ indexed by the elements of $\beta$, which are mutually comparable, there exists a corresponding face $F$ of $S_0$ of the same dimension (the intersection of the facets indexed by the same elements), i.e.\ for every facet of $P_{\mathcal{B}_{\beta}}$ containing $F_\beta$, there is a corresponding facet of $S_0$ containing $F$ with the same outward normal. Then, applying Lemma~\ref{l_glavnal} for some \mbox{$c\in (0,1)$}, we conclude that there is a parallel truncation tr$_FS_0=S_0 \cap \alpha_0^{\geqslant}$ such that $a_\beta$ is an outward normal to $\alpha_0^{\geqslant}$. This, together with Definition~\ref{d_pideformacija}(ii) and the fact that $N_{\beta}$ is an $F_{\beta}$-de\-for\-ma\-tion of the nestohedron, implies that $N_{\beta}$ can be obtained from tr$_FS_0$ by parallel translations of the facets without crossing over the vertices. Also, by Definition~\ref{d_pideformacija}(i), we have that $S_0\cap \alpha_0 \simeq P_{\mathcal{B}_{\beta}} \cap \pi_{\beta,c} \simeq N_{\beta} \cap \pi_{\beta,1}$. Therefore, according to Definition~\ref{d_pideformacija}, $N_{\beta}$ is an $F$-deformation of $S_0$, and hence, by Proposition~\ref{l_deformacijajesingleprofinjenje}, $N_{\beta}$ is a truncator summand for $S_0$ in $F$, i.e.\ $S_0+N_{\beta} \simeq \text{tr}_FS_0$. Now, for $m=\lvert \mathcal{A}_2 \rvert$, let $x: [m]\longrightarrow \mathcal{A}_2$ be an indexing function such that $\lvert x(i) \rvert \geqslant \lvert x(j) \rvert$ for every $i<j$. Then, let $\{Q_i\}_{i \in [m]}$ be an indexed set of polytopes such that $Q_i=N_{x(i)}$. We show that this indexed set is a truncator set of summands for the permutohedron $S_0$, which entails that Definition~\ref{d_Mink_realizacija}(iii) is satisfied. Starting from the permutohedron $S_0$, let $S_1$ be the partial sum $S_0+Q_1$, and for the sake of simplicity, let $\beta$ denote $x(1)$. From the above conclusion, we have that $S_1 \simeq \text{tr}_FS_0$, where tr$_FS_0$ is a parallel truncation in the face that corresponds to $F_\beta$ and $a_{\beta}$ is an outward normal to the truncation halfspace.
We iteratively repeat the following for every $2\leqslant i \leqslant m$. To be precise, at the $i$th step, let $S_{i}=S_{i-1}+Q_{i}$ and suppose that for every $j<i$ we have that \mbox{$S_{j}\simeq \text{tr}_FS_{j-1}$}, where tr$_FS_{j-1}$ is a parallel truncation in the face that corresponds to $F_{x(j)}$. Again, let $\beta=x(i)$. As long as the cardinality of $\beta$ is maximal, i.e.\ as long as $F_{\beta}$ and the corresponding face $F$ of $S_{i-1}$ are vertices, we may apply a completely analogous argument to the one above and obtain that $Q_i$ is an $F$-de\-for\-mation of $S_{i-1}$, which, together with Proposition~\ref{l_deformacijajesingleprofinjenje}, implies $S_{i}\simeq \text{tr}_FS_{i-1}$. Since $\mathcal{B}_1$ contains all maximal 0-nested sets, we can notice that each vertex of $S_0$ is truncated. Suppose that $k=\dim(F_\beta)>0$. Since all truncations at the previous steps were in faces of lower dimensions, there exists the corresponding face $F$ of $S_{i-1}$. Also, since all truncations were parallel and the normal equivalences held, $P_{\mathcal{B}_\beta}$ can still be obtained from $S_{i-1}$ by parallel translations of the facets without crossing over the vertices, but the second part of Definition~\ref{d_pideformacija}(i) does not hold in general. This means that we cannot conclude that $N_\beta$ is an $F$-deformation of $S_{i-1}$. Thus, in order to prove that $S_{i}\simeq \text{tr}_FS_{i-1}$ still holds, we use Proposition~\ref{p_dovoljnomakskonuseposmatrati}. Since $Q_i$, $S_{i-1}$ and tr$_FS_{i-1}$ are $n$-polytopes, without loss of generality, we consider the union of all normal cones in each of their fans as $\mathbf{R}^n$. Firstly, we consider all normal cones to $Q_i$ at vertices not contained in $\pi_{\beta,1}$. Let $N$ be one of them. Since $Q_i$ is an $F'$-deformation of $S_0$, where $F'$ is the corresponding face of $S_0$, by Lemma~\ref{l_zvezda}, $N$ is the union of the normal cones to tr$_{F'}S_0$ at vertices not contained in the truncation hyperplane, which are the normal cones to $S_0$ at vertices not contained in $F'$. Since $S_{i-1}$ is obtained from $S_0$ by the sequence of parallel truncations, up to normal equivalence, by Lemma~\ref{l_konusiparalelnetrunkacije}, $N$ is the union of the normal cones $N_1,\ldots,N_t$ to $S_{i-1}$ at vertices not contained in $F$. Then, for every $j\in [t]$, $N \cap N_j$ is $N_j$, the normal cone to tr$_FS_{i-1}$ at a vertex not contained in the truncation hyperplane. Therefore, Proposition~\ref{p_dovoljnomakskonuseposmatrati}(ii) is satisfied for the considered maximal normal cones to $Q_i$. Now, let $N$ be the normal cone to $Q_i$ at a vertex contained in $\pi_{\beta,1}$. By Lemma~\ref{l_zvezda}, $N=N'\cup N_v$, where $N_v$ is the normal cone to tr$_{F'}S_0$ at a vertex contained in the truncation hyperplane, while $N'$ is the union of the normal cones to tr$_{F'}S_0$ at vertices not contained in that hyperplane, i.e.\ the union of the same normal cones to $S_0$ at vertices not contained in $F'$. As above, $N'$ is the union of maximal normal cones to $S_{i-1}$ which are the normal cones to tr$_{F}S_{i-1}$ at vertices not contained in the truncation hyperplane, and hence, their intersections with $N$ are these cones themselves. Any other $n$-cone, which can be obtained as the intersection of $N$ and some maximal normal cone to $S_{i-1}$, is the intersection of that cone and $N_v$. In order to analyse such cases, let $u$ be a vertex of $S_0$ contained in $F'$.
Since $S_0$ is a simple polytope, its $k$-face $F'$ belongs to exactly $p=n-k$ facets. Without loss of generality, we assume that they are defined by the halfspaces $\{\alpha_j^{\geqslant} \mid j \in [p]\}$, while $u$ is the intersection of the hyperplanes $\{\alpha_j\mid j \in [n]\}$. Then, every element of the set $\{-a_j \mid j \in [p]\}$ is a spanning ray of the normal cone to $S_{i-1}$ at a vertex contained in $F$. Since $F'$ is also simple, there are exactly $k$ vertices adjacent to $u$ in $F'$, which implies that there are exactly $p$ vertices of $S_0$ adjacent to $u$ that do not belong to $F'$. Hence, by Lemma~\ref{l_konusiparalelnetrunkacije}, $N_u(S_0)$ is the union of $p$ normal cones $N_{v_1},...,N_{v_p}$ to tr$_{F'}S_0$ at the corresponding vertices contained in the truncation hyperplane. It remains to consider precisely their intersections with an arbitrary maximal normal cone $M$ to $S_{i-1}$. In order to prove that these intersections also satisfy Proposition~\ref{p_dovoljnomakskonuseposmatrati}(ii), it is enough to show the following: if $M$ is the normal cone to $S_{i-1}$ at a vertex not contained in $F$, i.e.\ if one of the elements of $\{-a_j \mid j \in [p]\}$ is not a spanning ray of $M$, then $M$ is contained in $N_{v_j}$ for some $j\in[p]$, which entails that its intersection with $N_{v_j}$ is $M$ itself while the other intersections are not maximal cones; otherwise, i.e.\ if $\{-a_j \mid j \in [p]\}$ is a subset of the spanning set of $M$, then for every $j\in [p]$ the intersection $M\cap N_{v_j}$ is an $n$-cone with $a_\beta$ as a spanning ray, and moreover, the union of all these cones is $M$. For $j\in[p]$, let $N_{v_j}$ be spanned by the set $\{a_\beta\} \cup \{-a_i \mid i\in [n]-\{j\}\}$. Recall again that $N_u(S_0)$ is the union of maximal normal cones to $S_{i-1}$ according to Lemma~\ref{l_konusiparalelnetrunkacije}. Namely, the other faces of $S_0$ containing $u$ might have been truncated at previous steps, the truncation in the vertex $u$ being the first of them. Assume that $h_1$ is an outward normal to the halfspace of that truncation. By Lemma~\ref{l_konusiparalelnetrunkacije}, that truncation produced $n$ maximal normal cones $N_1,\ldots,N_n$ whose union is $N_u(S_0)$, and then, other truncations refined these cones further. We may assume that $N_j$, $j \in [n]$, is spanned by the set $\{h_1\} \cup\{-a_i\mid i\in[n]-\{j\}\}$. Then, applying Remark~\ref{r_haovi} and Lemma~\ref{l_normalezaglavnut} for $I=[p]$ and $J=[n]$, we obtain that $h_1$ is contained in the cone spanned by the set $\{-a_{p+1},-a_{p+2},\ldots,-a_n,a_{\beta}\}$, which entails that for every $j\in[p]$ each spanning ray of $N_j$ is contained in $N_{v_j}$. Hence, $N_j \subseteq N_{v_j}$, i.e.\ every maximal normal cone to $S_{i-1}$ contained in $N_j$ is contained in $N_{v_j}$. Now, if $M$ is one of the remaining maximal cones to $S_{i-1}$ contained in $N_u(S_0)$, then $M$ is spanned by the set obtained from $\{-a_j \mid j \in [n]\}$ by replacing $q$ elements with some vectors $h_1,\ldots,h_q$, such that each of them is an outward normal to the corresponding truncation halfspace. All these truncations were made at some of the previous steps in a face of $S_0$ contained in $F'$. Moreover, all these faces (as well as the corresponding elements of $\mathcal{A}_2$) are mutually comparable.
Therefore, by Remark~\ref{r_haovi}, assuming that $h_1=-(a_1+\ldots+a_n)$ and that $h_q$ corresponds to the face which contains all the others, we conclude that the conditions of Lemma~\ref{l_normalezaglavnut}(ii) are satisfied. Now, we have two cases. Firstly, let us assume that $M$ corresponds to a vertex of $S_{i-1}$ not contained in $F$. Then, for some $j \in [p]$ the ray $-a_j$ is not a spanning ray of $M$. By applying Lemma~\ref{l_normalezaglavnut}(ii) $q-1$ times, we replace the vectors $h_1,\ldots,h_{q-1}$ by the corresponding elements $-a_i$, $p<i\leqslant n$, and obtain that $M$ is contained in the $n$-cone $M'$ spanned by the set $\{h_q\}\cup\{-a_j\mid j \in[n]-\{r\}\}$ for some $r\in[p]$. Since the cone $N_{v_r}$ is spanned by the set \mbox{$\{a_\beta\}\cup\{-a_j\mid j \in[n]-\{r\}\}$}, applying Lemma~\ref{l_normalezaglavnut}(i), we conclude that $M'\subseteq N_{v_r}$, and hence, $M\subseteq N_{v_r}$. Otherwise, i.e.\ if $M$ corresponds to some vertex of $S_{i-1}$ contained in $F$, then for every $j \in [p]$, $-a_j$ is a spanning ray of $M$. By Remark~\ref{r_haovi}, $M$ is the union of $n$-cones $N_1,\ldots,N_p$ such that the spanning set of $N_{j'}$, $j'\in[p]$, can be obtained from the spanning set of $M$ by replacing the ray $-a_{j'}$ with the ray $a_\beta$. Now, as above, for each $N_{j'}$ we apply Lemma~\ref{l_normalezaglavnut}(ii) $q$ times. Namely, by replacing the vectors $h_1,\ldots,h_{q}$ with the corresponding elements $-a_i$, where $p<i\leqslant n$, we obtain that $N_{j'}$ is contained in an $n$-cone spanned by the set \mbox{$\{a_\beta\}\cup\{-a_j\mid j \in[n]-\{j'\}\}$}. Since this set spans $N_{v_{j'}}$, we can conclude that for every $j\in [p]$ the intersection $M \cap N_{v_j}$ is $N_{v_j}$, the normal cone to tr$_FS_{i-1}$ at some vertex contained in the truncation hyperplane, and moreover, the union of all these intersections is $M$. Finally, we can conclude that Proposition~\ref{p_dovoljnomakskonuseposmatrati}(ii) holds. Also, from all of the above, together with Lemma~\ref{l_konusiparalelnetrunkacije}, one can verify that there is no vertex of tr$_FS_{i-1}$ which is not obtained in some of the mentioned intersections, i.e.\ that the remaining condition is also satisfied. It remains to apply Proposition~\ref{p_dovoljnomakskonuseposmatrati}, concluding that $S_{i} \simeq$ tr$_FS_{i-1}$, i.e.\ that $Q_{i}$ is a truncator summand for $S_{i-1}$. For every $i\in [m]$, at the end of the \textit{i}th step, we label the facets of $S_i$ in the following manner: the corresponding facets of $S_i$ and $S_{i-1}$ are equilabelled, while the newly appeared facet is labelled by $x(i)$ (see Remark~\ref{r_correspondingfacettruncation}). At the end, we get the $n$-polytope $PA_{n,1}$ as the last obtained sum $S_{m}$. Since for every $i\in[m]$, $Q_{i}$ is a truncator summand for $S_{i-1}$, Definition~\ref{d_Mink_realizacija}(iii) is satisfied. Also, every element of $\mathcal{B}_1$ is used as a label for a facet of $PA_{n,1}$ such that equilabelled facets of $PA_{n,1}$ and $\mathbf{PA}_n$ are parallel. This, together with Corollary~\ref{c_labele_odPA_n}, implies $PA_{n,1} \simeq \mathbf{PA}_n$, and hence, Definition~\ref{d_Mink_realizacija}(i) is also satisfied. \end{proof} By Corollary~\ref{prop_minkowski_svojstva2}(i), $^M PA_2=PA_{2,1}$ holds, up to translation. Applying Lemma~\ref{l_glavnal} and following the proof of the previous theorem, we obtain the following family of $n$-dimensional Minkowski-realisations of the simplicial complex~$C$.
\begin{thm}\label{t_glavnatfamilija} For $n\geqslant 2$ and $c\in (0,1]$, the polytope \[ PA_{n,c}= \Delta_{[n+1]}+\sum_{\beta\in \mathcal{A}_1} \Delta_{\bigcup\beta}+ \sum_{\beta\in \mathcal{A}_2} \bigl (P_{\mathcal{B}_\beta}\bigcap {\pi_{\beta,c}}^\geqslant \bigr ) \] is an $n$-dimensional Minkowski-realisation of the simplicial complex $C$, which is normally equivalent to $\mathbf{PA}_n$. \end{thm} \begin{center}\textmd{Acknowledgements} \end{center} \medskip I thank Zoran Petri\' c for his support and very helpful discussions throughout this research. This work was also supported by Grant III44006 of the Ministry of Education and Science of the Republic of Serbia.
{ "timestamp": "2020-03-05T02:10:58", "yymm": "1904", "arxiv_id": "1904.06700", "language": "en", "url": "https://arxiv.org/abs/1904.06700" }
\section{INTRODUCTION} In practice, under the influence of disturbances and uncertainties, the control performance of a dynamical system can deteriorate. To overcome this problem, sliding mode control has proved to be a powerful tool for rejecting disturbances and uncertainties using discontinuous control action with infinite switching frequency \cite{edwards1998}. This is applicable to continuous-time systems. Nowadays, the extensive use of digital devices in control systems necessitates the study of sampling/hold effects when designing a control algorithm. Due to hardware limits, no control action with such an infinite switching frequency, as in continuous-time systems, is available. Whilst, theoretically, sliding mode control has the ability to reject matched external disturbances or uncertainties for continuous-time systems \citep{DRAZENOVIC1969,edwards1998}, an ideal sliding mode cannot be obtained in \emph{sampled-data systems} due to the sampling/hold effect. In this situation, only ``quasi sliding modes'' are achieved, i.e.\ the system state is kept in a boundary layer around the sliding surface \citep{milo85}. Numerous research works have been conducted addressing the problem of state feedback sliding mode control of sampled-data systems; see \citep{su,abxu07,Du2016,Niu_IETCTA_2010,Xu_IETCTA_2013,Behera_IETCTA_2015} and the references therein. The most common feature is that the control laws are chattering-free and maintain an $O(T^2)$ quasi-sliding motion. In \citep{su}, a non-switching control method for a class of \emph{sampled-data systems} was exploited to avoid the chattering phenomenon during the quasi sliding mode phase. In the state feedback sliding mode control problem, a dead-beat type control law based on the one-step delayed disturbance approximation method results in a quasi sliding mode boundary layer of thickness $O(T^2)$, where $T$ is the sampling period \citep{su}. With this accuracy of the quasi sliding mode, the state is kept within an ultimate $O(T)$ bound \citep{abxu07}. An $O(T^2)$ quasi sliding mode can be obtained in \emph{sampled-data systems} in the context of state feedback \citep{abxu07}. In this paper, we aim to address the output feedback sliding mode control problem for linear sampled-data multi-input multi-output systems in the presence of external disturbances. Several output feedback sliding mode control methods for sampled-data systems have been proposed in \cite{Lin2010,lai07,nguyen09b,Milo2013, Nguyen2016}. The methods in \cite{Lin2010,Milo2013} were only proposed for single-input single-output systems, which limits their applicability. Similarly, a minimum variance control scheme was presented in \cite{mitic2004}, where a quasi-sliding mode with $O(T^3)$ accuracy was achieved for single-input single-output systems. In \citep{nguyen09b,Nguyen10,Nguyen2016}, output feedback sliding mode control schemes were proposed for multi-input multi-output systems to achieve quasi-sliding motion with boundary layers of $O(T^2)$ and $O(T^3)$, respectively. However, the control signals in \citep{nguyen09b,Nguyen10,Nguyen2016} are of order $O(1/T)$, which can be detrimental to system hardware such as actuators during transients or in the presence of disturbances. Moreover, these (effectively) high gain controllers can be sensitive to measurement noise, which deteriorates the control performance. In this paper, improved versions of the control schemes in \citep{nguyen09b,Nguyen2016} are proposed to avoid possible high gain control efforts.
Our paper exploits sampled-data predictors to estimate disturbances. The contributions of the paper are: \begin{itemize} \item[i)] to provide a control technique to reduce the high-gain control effect during transients while maintaining a certain level of desired performance; \item[ii)] to provide a theoretical analysis of the proposed scheme (the preliminary results in \cite{nguyen2017improvement} omit the complete theoretical analysis due to space limitations); \item[iii)] to evaluate the performance of the proposed scheme across different cases. Unlike the conference version \cite{nguyen2017improvement}, in this paper we consider the influence of noise on the new method. \end{itemize} Note that in the conference version \cite{nguyen2017improvement}, the disturbance and its first and second derivatives are required to be bounded. Meanwhile, in this paper, we consider a more general case in which only the disturbance and its first derivative are bounded. In this paper, $\lambda\{A\}$ represents the spectrum of the matrix $A$, while $I_m$ is the identity matrix of order $m$. A vector function $f(t,s)\in R^n$ is said to be $O(s)$ over an interval $[t_1,t_2]$ if there exist positive constants $K$ and $s^*$ such that $\|f(t,s)\|\leq Ks, \quad \forall s\in[0,s^*],\quad \forall t\in[t_1,t_2]$ \citep{kok86}. Throughout the paper, $f[k]$ stands for $f(kT)$, where $k=0, 1, 2, \ldots$ is the index of the discrete-time sequence. The paper is organized as follows. Section II presents the formulation of the problem. The main results are described in Section III. Simulation results are presented to illustrate the efficacy of the proposed schemes in Section IV. The final section offers some conclusions. \section{PROBLEM FORMULATION} Consider the following system \begin{eqnarray}\label{a-0} \dot{x}(t)&=&Ax(t)+B(u(t)+f(t))\\ \nonumber y(t)&=&Cx(t), \end{eqnarray} where $x(t)\in R^n$ is the system state, $u(t)\in R^m$ is the system control input, $y(t)\in R^p$ is the system output, and $f(t)\in R^m$ is an unknown bounded external disturbance, with $m\leq p<n$. A switching function based on output information will be considered: \begin{equation}\label{a} s=Hy. \end{equation} \newtheorem{assumption}{Assumption} \begin{assumption}\label{as1} The disturbance $f(t)$ and its first derivative are bounded. \end{assumption} \begin{assumption}\label{as2} The disturbance $f(t)$ and its first and second derivatives are bounded. \end{assumption} \begin{assumption}\label{as3} There exists a full rank $m \times p$ matrix $H$ such that the square matrix $HCB$ is invertible and the continuous-time sliding surface, $s(t)=0$, is a legitimate design in the sense that the reduced order motion is stable \citep{edspu95}. \end{assumption} \begin{remark} According to \citep{edspu95}, if system (\ref{a-0}) has relative degree equal to one with stable invariant zeros, and $B$ and $C$ have full rank, then Assumption \ref{as3} is satisfied. A method to design the matrix $H$ can be based on the framework in \citep{edspu95}. \end{remark} \begin{remark} In this paper, we consider a more general class of disturbances, which only requires the boundedness of the disturbance and its first derivative. In Assumption \ref{as2}, which is used in \cite{nguyen09b,Nguyen2016,nguyen2017improvement}, the disturbance and its first and second derivatives are bounded. The disturbance considered in \cite{su,abxu07} is smooth.
\end{remark} The sampled-data version of (\ref{a-0}) is \begin{align}\label{b-3a} x[k+1]=&\Phi x[k]+\Gamma u[k]+d[k]\\ \nonumber y[k]=&Cx[k], \end{align} where \begin{eqnarray} \Phi&=&e^{AT}=\sum_{k=0}^\infty \frac{(TA)^k}{k!},\\ \Gamma&=&\int_0^T e^{A\tau} d\tau B=\int_0^T\sum_{k=0}^\infty \frac{(\tau A)^k}{k!}Bd\tau, \end{eqnarray} and in (\ref{b-3a}) the disturbance is \begin{equation}\label{dk} d[k]=\int_0^T e^{A\tau}Bf((k+1)T-\tau)d\tau. \end{equation} Define \begin{eqnarray} \bar{A}&=&\frac{1}{T}(\Phi-I_n),\\ \bar{\bar{A}}&=&\frac{1}{T^2}(\Phi-I_n-TA)=\sum_{k=2}^{\infty}T^{k-2}\frac{A^k}{k!}=O(1),\\ \bar{B}&=&\frac{\Gamma}{T},\\ \bar{\bar{B}}&=&\frac{1}{T^2}(\Gamma-TB). \end{eqnarray} With the above definitions, the system matrices of the discrete-time system (\ref{b-3a}) satisfy \begin{eqnarray} \Phi&=&I_n+T\bar{A}=I_n+T(A+T\bar{\bar{A}}), \label{phi}\\ \Gamma &=&T\bar{B}=T(B+T\bar{\bar{B}}). \label{gammaB} \end{eqnarray} Due to the sampling effect, the disturbance $d[k]$ in the sampled-data system contains unmatched components; details of its properties are described in the following lemmas. \begin{lemma}\label{l1} If Assumption \ref{as1} holds, then \begin{subequations}\label{lem-1} \begin{align} d[k]&=\Gamma f[k]+d^\prime[k],\label{dk_lem1}\\ d[k]&-d[k-1]=O(T^2), \label{dk2_lem1} \end{align} \end{subequations} where \begin{equation}\label{dpk_l1} d^\prime[k]=\int_0^T e^{A\tau}B\int_{kT}^{(k+1)T-\tau}v(\beta)d\beta d\tau=O(T^2), \end{equation} \begin{equation} v(t)=df(t)/dt. \end{equation} \end{lemma} \begin{proof} Consider $0\leq\tau<T$ and express $f((k+1)T-\tau)$ as \begin{equation}\label{f_lem1} f((k+1)T-\tau)=f[k]+\int_{kT}^{(k+1)T-\tau}v(\beta)d\beta. \end{equation} Substituting (\ref{f_lem1}) into (\ref{dk}), we obtain \begin{eqnarray} \nonumber d[k]&=&\int_0^T e^{A\tau}B(f[k]+\int_{kT}^{(k+1)T-\tau}v(\beta)d\beta)d\tau\\ \nonumber &=&\int_0^T e^{A\tau}Bf[k]d\tau+d^\prime[k]\\ &=&\Gamma f[k]+d^\prime[k]. \end{eqnarray} By Assumption \ref{as1}, $v(t)$ is bounded, namely $\|v(t)\|\leq V$ for some constant $V=O(1)$. We have \begin{eqnarray} \nonumber \|d^\prime[k]\|&=& \|\int_0^T e^{A\tau}B\int_{kT}^{(k+1)T-\tau}v(\beta)d\beta d\tau\| \\ \nonumber &\leq& V\int_0^T \|e^{A\tau}B\|(T-\tau) d\tau\\ \nonumber &\leq& TV\int_0^T \|e^{A\tau}B\| d\tau\\ &=&O(T^2). \end{eqnarray} We also have \begin{eqnarray} \nonumber d[k]-d[k-1]&=&\Gamma (f[k]-f[k-1])+(d^\prime[k]-d^\prime [k-1])\\ &=&\Gamma \int_{(k-1)T}^{kT}v(t) dt+(d^\prime[k]-d^\prime [k-1]). \end{eqnarray} Since \begin{equation} \|\int_{(k-1)T}^{kT}v(t) dt\|\leq \int_{(k-1)T}^{kT}V dt=TV=O(T), \end{equation} $\Gamma=O(T)$, and $d^\prime[k]=O(T^2)$, it follows that \begin{equation} \nonumber d[k]-d[k-1]=O(T^2). \end{equation} \end{proof} The following lemma was employed in \citep{abxu07,nguyen09b,Nguyen10,Nguyen2016,nguyen2017improvement}. \begin{lemma}\label{l2} If Assumption \ref{as2} holds, then \begin{subequations}\label{lem-2} \begin{align} d[k]&=\Gamma f[k]+d^\prime[k], \label{dk_lem2}\\ d[k]&-d[k-1]=O(T^2), \label{dk2_lem2}\\ d[k]&-2d[k-1]+d[k-2]=O(T^3), \label{dk3_lem2} \end{align} \end{subequations} where \begin{eqnarray} \label{dpk_l2} d^\prime[k]&=&\frac{T}{2}\Gamma v[k]+T^3\Delta d[k]=O(T^2),\\ \nonumber\Delta d[k]&=&\hat{M}v[k]+\frac{1}{T^3}\int_0^T e^{A\tau}B\int_{kT}^{(k+1)T-\tau}\int_{kT}^{\beta}\dot{v}(\sigma)d \sigma d\beta d\tau\\ &=&O(1),\label{delta_dk} \end{eqnarray} and in (\ref{delta_dk}) \begin{equation} \hat{M}=(-\frac{1}{12}A-\frac{T}{12}\bar{\bar{A}})B=O(1). \end{equation} \end{lemma} \begin{proof} The proof is presented in \cite{nguyen09b}.
\end{proof} \begin{remark} A discrete-time model can be derived using the delta operator in \citep{Middleton_TAC_1986}, from which a switching sliding mode control scheme was proposed to address state feedback control for a discrete-time system subject to an external disturbance \citep{Kumari_2016_IECON}. Meanwhile, our control law is non-switching and is in the context of output feedback. The problem in \citep{Kumari_2016_IECON} is suitable for fast sampling rates with simpler assumptions, i.e., there are no unmatched disturbance components in the discrete-time model. Furthermore, our problem is more general in the sense that it is not limited to fast sampling. \end{remark} Following the derivation in \citep{Nguyen2016}, we employ the following nonsingular transformation matrix \begin{equation}\label{P-1} P_1=\left[\begin{matrix}M\\HC\end{matrix}\right], \end{equation} where $M\in R^{(n-m)\times n}$ and $MB=0$, which implies $\textrm{Range}(M^T)=\textrm{Null}(B^T)$. As demonstrated in \citep{Nguyen2016}, $P_1$ has full rank. Let the inverse of $P_1$ be partitioned as \begin{equation}\label{invP1} P_1^{-1}=\left[\begin{matrix}Q&R\end{matrix}\right], \end{equation} where $Q$ has $n-m$ columns. Let $\left[\begin{matrix}\xi^T&s^T\end{matrix}\right]^T=P_1x$; then in the new coordinates \begin{equation}\label{a-2} \left[\begin{matrix}\dot{\xi}\\ \dot{s}\end{matrix}\right]=\left[\begin{matrix}MAQ&MAR\\HCAQ&HCAR\end{matrix}\right]\left[\begin{matrix}\xi\\ s\end{matrix}\right] +\left[\begin{matrix}0\\HCB\end{matrix}\right](u+f). \end{equation} This is in ``normal form'', which implies that the sliding mode dynamics of system (\ref{a-2}) is \begin{equation}\label{Ac} \dot{\xi}=MAQ\xi=A_c\xi, \end{equation} where the eigenvalues of the matrix $A_c$ contain any invariant zeros of (\ref{a-0}) \citep{edspu95}. Now, consider the sampled-data version of the continuous-time system in (\ref{a-0}): \begin{align}\label{b-3} \nonumber x[k+1]=&\Phi x[k]+\Gamma u[k]+d[k]\\ y[k]=&Cx[k]\\ \nonumber s[k]=&Hy[k], \end{align} where the output feedback sliding vector is prescribed in (\ref{a}). The control methods in \citep{nguyen09b,Nguyen2016} designed for system (\ref{b-3}) can exhibit high gain transients of the order of $O(1/T)$ when the system state is far from the sliding surface. In this paper, our objective is to provide a solution to this high gain problem such that the control efforts are $O(1)$, but a certain level of accuracy of the sliding mode is still guaranteed. \section{MAIN RESULTS} In this section, improved versions of the schemes in \citep{nguyen09b,Nguyen2016} will be proposed. For convenience, we call the method in \citep{nguyen09b} Method~1 (M1), and the one in \citep{Nguyen2016} Method 2 (M2). Using (\ref{phi}), (\ref{gammaB}), (\ref{dk_lem1}), and (\ref{P-1}), the system in (\ref{b-3}) becomes \begin{align}\label{b-4} \nonumber\left[\begin{matrix}\xi[k+1]\\ s[k+1]\end{matrix}\right]=&\left[\begin{matrix}I_{n-m}+TM\bar{A}Q&TM\bar{A}R\\THC\bar{A}Q&I_m+THC\bar{A}R\end{matrix}\right]\left[\begin{matrix}\xi[k]\\ s[k]\end{matrix}\right]\\ &+\left[\begin{matrix}TM\bar{B}\\THC\bar{B}\end{matrix}\right]u[k]+\left[\begin{matrix}d_{11}[k]\\d_{12}[k]\end{matrix}\right], \end{align} where \begin{eqnarray} d_{11}[k]&=&T^2M\bar{\bar{B}} f[k]+M d^\prime[k]=O(T^2), \label{d11}\\ d_{12}[k]&=&THC\bar{B} f[k]+HCd^\prime[k]=O(T), \label{d12} \end{eqnarray} since $\bar{B}=B+T\bar{\bar{B}}$, $MB=0$, $M\bar{B}=O(T)$, and $d^\prime[k]=O(T^2)$ according to Lemma \ref{l1}.
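As an aside, the objects introduced above are straightforward to construct numerically. The following Python sketch (our own illustration, not code from \citep{nguyen09b,Nguyen2016}; all function names are ours) computes $\Phi$ and $\Gamma$ via the standard augmented-matrix (Van Loan) method and builds $M$ and $P_1$ from a numerical null space, assuming SciPy is available.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm, null_space

def discretise(A, B, T):
    # ZOH discretisation: exp([[A, B],[0, 0]]*T) = [[Phi, Gamma],[0, I]]
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    E = expm(M * T)
    return E[:n, :n], E[:n, n:]          # Phi, Gamma

def transformation(B, C, H):
    # Rows of M span Null(B^T), so M @ B = 0; then P1 = [M; HC]
    M = null_space(B.T).T
    P1 = np.vstack([M, H @ C])
    Pinv = np.linalg.inv(P1)
    Q, R = Pinv[:, :M.shape[0]], Pinv[:, M.shape[0]:]
    return P1, M, Q, R
\end{verbatim}

In particular, the invertibility of $HC\bar{B}=HC\Gamma/T$ required by the schemes below can be checked directly from the computed quantities.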
The $s[k]$ dynamics in (\ref{b-4}) can be written as \begin{equation}\label{ss} s[k+1]=(I_m+T\Omega_2)s[k]+T HC\bar{B}u[k]+g[k], \end{equation} where \begin{eqnarray}\label{gk} g[k]&=&T\Omega_1\xi[k]+d_{12}[k],\\ \label{om1}\Omega_1&=&HC\bar{A}Q,\\ \label{om2} \Omega_2&=&HC\bar{A}R. \end{eqnarray} As in \citep{ud89}, solving for $s[k+1]=0$ yields the discrete-time equivalent control law \begin{equation}\label{ueq} u^{eq}[k]=-\frac{1}{T}(HC\bar{B})^{-1}((I_m+T\Omega_2)s[k]+g[k]), \end{equation} which is not physically implementable since it contains $g[k]$, which is unknown at time instant $k$ \cite{Nguyen2016}. The cause of the high gain phenomenon stems from the fact that the control laws are designed to force $s[k+1]=0$. To mitigate this problem, we design a control law such that \begin{equation}\label{sk} s[k+1]=\alpha s[k], \end{equation} where $|\alpha|<1$ is a design parameter. Solving (\ref{sk}) for the equivalent control law, \begin{equation}\label{ueq2} u[k]=-\frac{1}{T}(HC\bar{B})^{-1}(((1-\alpha)I_m+T\Omega_2)s[k]+g[k]). \end{equation} \begin{remark} According to Assumption \ref{as3}, $HCB$ is nonsingular. Since $\bar{B}=B+O(T)$ by construction, there is a small enough $T$ such that $HC\bar{B}$ is invertible. This was proved in \citep{Nguyen2016}. \end{remark} When the system state is not close to the origin, such that $\xi[k]=O(1)$ and $s[k]=O(1)$, the expression in (\ref{gk}) implies that $g[k]=O(T)$. Choose $\alpha\in(0,1)$ such that \begin{equation}\label{beta} \beta\triangleq\frac{1-\alpha}{T}=O(1). \end{equation} From (\ref{beta}), $1-\alpha=\beta T$ and thus the equivalent control $u[k]$ in (\ref{ueq2}) is $O(1)$. Note, however, that $g[k]$ is unknown at time instant $k$. In the following, we will present methods to approximate $g[k]$ to obtain a physically realisable control law. \subsection{Development of a Modified Version of Method 1} \label{c1} In this subsection, we consider the case when Assumption \ref{as1} holds. From (\ref{d12}) and (\ref{gk}), $g[k]$ contains $f[k]$, which can be approximated by $f[k-1]$ due to the continuity and boundedness properties of $f(t)$ and its first derivative. Indeed, $\|f[k]-f[k-1]\|=\|\int_{(k-1)T}^{kT}v(\beta)d\beta\|\leq \int_{(k-1)T}^{kT}V d\beta=TV=O(T)$, where $\|v(t)\|\leq V$. As shown in \citep{nguyen09b}, $g[k]$ can be approximated by $g[k-1]$, which is computed from (\ref{ss}) as \begin{equation}\label{d-1b} g[k-1]=s[k]-(I_m+T\Omega_2)s[k-1]-THC\bar{B}u[k-1]. \end{equation} Hence, using $g[k-1]$ in place of $g[k]$ in (\ref{ueq2}) yields the expression \begin{equation}\label{mu1} u[k]=-\frac{1}{T}(HC\bar{B})^{-1}(((1-\alpha)I_m+T\Omega_2)s[k]+g[k-1]). \end{equation} From (\ref{d-1b}) and (\ref{mu1}), \begin{eqnarray}\label{mu1b} \nonumber u[k]&=&-\frac{1}{T}(HC\bar{B})^{-1}(((2-\alpha)I_m+T\Omega_2)s[k]\\ &&-(I_m+T\Omega_2)s[k-1])+u[k-1]. \end{eqnarray} From (\ref{beta}), $1-\alpha=\beta T$, and hence from (\ref{mu1}), \begin{eqnarray} \nonumber u[k]&=&-\frac{1}{T}(HC\bar{B})^{-1}((\beta T I_m+T\Omega_2)s[k]+g[k-1])\\ \nonumber &=&-(HC\bar{B})^{-1}((\beta I_m+\Omega_2)s[k] +\frac{g[k-1]}{T})\\ &=&O(1).\label{mu1c} \end{eqnarray} Next, we study the stability of the closed-loop system under the control law (\ref{mu1b}) in the absence of external disturbances. Let \begin{equation} \label{psi1} \psi_1[k]=\left[\begin{matrix} \xi[k]\\ s[k]\\ \gamma[k]\end{matrix}\right], \end{equation} where \begin{equation} \label{u-1} \gamma[k]=T HC\bar{B}u[k].
\end{equation} From (\ref{ss}), (\ref{gk}), (\ref{mu1b}) and (\ref{u-1}), \begin{eqnarray}\label{gamma_dynamics} \nonumber \gamma[k+1]&=&-((2-\alpha)I_m+T\Omega_2)s[k+1]+(I_m+T\Omega_2)s[k]+\gamma[k]\\ \nonumber &=&-((2-\alpha)I_m+T\Omega_2)((I_m+T\Omega_2)s[k]+\gamma[k]+g[k])\\ \nonumber && +(I_m+T\Omega_2)s[k]+\gamma[k]\\ \nonumber &=&-((1-\alpha)I_m+T\Omega_2)(I_m+T\Omega_2)s[k]\\ \nonumber &&-((1-\alpha)I_m+T\Omega_2)\gamma[k]\\ &&-((2-\alpha)I_m+T\Omega_2)(T\Omega_1\xi[k]+d_{12}[k]). \end{eqnarray} From (\ref{b-4}), (\ref{psi1}), and (\ref{gamma_dynamics}), the dynamics of the closed-loop system utilizing the control law (\ref{mu1b}) is described by the augmented system \begin{equation}\label{aug} \psi_1[k+1]=A_{aug1} \psi_1[k]+d_2[k], \end{equation} where the system matrix \begin{equation}\label{A-aug1} A_{aug1}=\left[\begin{matrix}A_s&TN_1\\TN_2&A_{e1}\end{matrix}\right] \end{equation} and the sub-matrix \begin{equation}\label{As} A_s=I_{n-m}+TM\bar{A}Q=I_{n-m}+TA_c+T^2M\bar{\bar{A}}Q \end{equation} with $A_c$ as given in (\ref{Ac}), and \small \begin{align}\label{Ae1} A_{e1}= \left[\begin{matrix}(I_m+T\Omega_2)&I_m\\ -((1-\alpha)I_m+T\Omega_2)(I_m+T\Omega_2)&-((1-\alpha)I_m+T\Omega_2)\end{matrix}\right]. \end{align} The (augmented) disturbance term in system (\ref{aug}) is \begin{equation}\label{d-2} d_2[k]=\left[\begin{matrix}d_{11}[k]\\ d_{12}[k]\\-((2-\alpha)I_m+T\Omega_2)d_{12}[k]\end{matrix}\right], \end{equation} and the off-diagonal matrices in (\ref{A-aug1}) are \begin{eqnarray} \nonumber N_1&=&[M\bar{A}R\quad M\bar{\bar{B}}(HC\bar{B})^{-1}],\\ \nonumber N_2&=&\left[\begin{matrix}\Omega_1\\-((2-\alpha)I_m+T\Omega_2)\Omega_1\end{matrix}\right]. \end{eqnarray} Before demonstrating stability of the closed-loop system in the absence of disturbances, we need the following lemma. \begin{lemma}\label{eigAe1} The eigenvalues of $A_{e1}$ are $\alpha$ and $0$. \end{lemma} \noindent{\bf{\em Proof:}\ \ } Using column operations, \begin{eqnarray} \nonumber\det(\lambda I_{2m}-A_{e1})&=&\det\Big[{\begin{smallmatrix} \lambda I_m-(I_m+T\Omega_2)&-I_m\\ ((1-\alpha)I_m+T\Omega_2)(I_m+T\Omega_2)&\lambda I_m+((1-\alpha)I_m+T\Omega_2)\end{smallmatrix} \Big]}\\ \nonumber&=&(-1)^m\det\Big[{\begin{smallmatrix} -I_m&\lambda I_m-(I_m+T\Omega_2)\\ \lambda I_m+((1-\alpha)I_m+T\Omega_2)&((1-\alpha)I_m+T\Omega_2)(I_m+T\Omega_2)\end{smallmatrix} \Big]}\\ \nonumber &=&\det [((1-\alpha)I_m+T\Omega_2)(I_m+T\Omega_2)+(\lambda I_m\\ \nonumber&& \quad+((1-\alpha)I_m+T\Omega_2))(\lambda I_m-(I_m+T\Omega_2))]\\ &=& \det[ \lambda (\lambda-\alpha)I_m]=\lambda^m(\lambda-\alpha)^m, \end{eqnarray} where the factor $\det(-I_m)=(-1)^m$ arising from the Schur complement step cancels the $(-1)^m$ from the column swap. This proves the lemma. \hspace*{\fill}~\QED\par\endtrivlist\unskip Since $0<\alpha<1$, the eigenvalues of $A_{e1}$ lie within the unit circle and we have the following theorem. \begin{theorem}\label{th1} Suppose Assumption \ref{as1} holds. In the absence of disturbances, under the discrete-time output feedback control law (\ref{mu1b}), the sampled-data system (\ref{b-4}) is asymptotically stable if the sampling period $T$ is small enough. \end{theorem} \noindent{\bf{\em Proof:}\ \ } Using similar arguments to those in \citep{kato1995}, the eigenvalues of $A_{aug1}$ are \begin{eqnarray} \lambda_1&=&\lambda \{A_s+O(T^2)\},\\ \lambda_2&=&\lambda \{A_{e1}+O(T^2)\}. \end{eqnarray} Since $A_c$ contains the stable eigenvalues associated with the zero dynamics of the original continuous-time sliding motion in (\ref{Ac}), the eigenvalues $\lambda\{A_s\}$ lie within the unit circle for sufficiently small $T$.
Hence, according to \citep{Nguyen2016}, the eigenvalues of $A_{aug1}$ lie within the unit circle for sufficiently small $T$, which implies the stability of the closed-loop system. \hspace*{\fill}~\QED\par\endtrivlist\unskip Next, the accuracy of the quasi-sliding motion of the system under the proposed control law (\ref{mu1b}) in the presence of the external disturbance will be studied. Under the control law (\ref{mu1}), \begin{eqnarray}\label{sk2} \nonumber s[k+1]&=&\alpha s[k]+g[k]-g[k-1]\\ \nonumber &=&\alpha s[k]+T\Omega_1 (\xi[k]-\xi[k-1])\\ &&+d_{12}[k]-d_{12}[k-1]. \end{eqnarray} Since $d_{12}[k]-d_{12}[k-1]= O(T^2)$ \citep{Nguyen2016}, and from (\ref{b-4}) $\xi[k]-\xi[k-1]=TM\bar{A}Q\xi[k-1]+TM\bar{A}R\,s[k-1]+O(T^2)$, we have \begin{equation} s[k+1]=\alpha s[k]+T^2 \Omega_1 M\bar{A}R\, s[k-1] +O(T^2), \end{equation} where the term $T^2\Omega_1 M\bar{A}Q\,\xi[k-1]$ has been absorbed into the $O(T^2)$ remainder. At steady state, $s[k+1]\approx s[k]\approx s[k-1]$ and \begin{equation} ((1-\alpha)I_m-T^2 \Omega_1 M\bar{A}R ) s[k] = O(T^2), \end{equation} or \begin{equation} s[k] =(\beta I_m-T \Omega_1 M\bar{A}R )^{-1} O(T)=O(T). \end{equation} Similarly, at steady state, $\xi[k+1]\approx \xi[k]$ and from (\ref{b-4}), \begin{equation} \xi[k]= (TM\bar{A}Q)^{-1} O(T^2)=O(T). \end{equation} Therefore, \begin{equation} x[k]=P_1^{-1}\left[ \begin{matrix}\xi[k]\\ s[k]\end{matrix}\right]=O(T). \end{equation} The above analysis is summarized in the following theorem. \begin{theorem}\label{th2} Under Assumptions \ref{as1} and \ref{as3}, the sampled-data output feedback control law (\ref{mu1b}) produces a quasi-sliding motion about the sliding surface $s=0$ with an $O(T)$ boundary layer and an ultimate bound of $O(T)$ on the original state variables. Furthermore, the control input is guaranteed to be $O(1)$ when the initial state variables are such that $s[0]=O(1)$. \end{theorem} \begin{remark} The proposed control law contains no switching action, thereby avoiding chattering phenomena. On the other hand, it is observed that the control law (\ref{mu1b}) is not able to completely compensate the disturbance $g[k]$. However, by taking past information into account, the control law (\ref{mu1b}) still provides the closed-loop system with certain characteristics that reduce the influence of external disturbances. \end{remark} \subsection{Development of a Modified Version of Method 2} \label{c2} In this subsection, we consider the case when Assumption \ref{as2} holds. We have \begin{equation} f[k+1]=f[k]+v[k]T+f^\prime[k], \end{equation} where \begin{equation} f^\prime[k]=\int_{kT}^{(k+1)T}\int_{kT}^\beta \dot{v}(\sigma)d\sigma d\beta. \end{equation} Since the second derivative of $f(t)$ is bounded, assume that $\|\dot{v}(t)\|\leq W=O(1)$. Hence, \begin{equation} \|f^\prime[k]\|\leq\int_{kT}^{(k+1)T}\int_{kT}^\beta \|\dot{v}(\sigma)\|d\sigma d\beta\leq \int_{kT}^{(k+1)T}(\beta-kT)W d\beta=\frac{T^2W}{2}, \end{equation} so that $f^\prime[k]=O(T^2)$. We also have \begin{eqnarray} \nonumber &&\|f[k]-2f[k-1]+f[k-2]\|\\ \nonumber &=&\|T(v[k-1]-v[k-2])+f^\prime[k-1]-f^\prime[k-2]\|\\ \nonumber &\leq&\|T\int_{(k-2)T}^{(k-1)T} \dot{v}(\sigma)d\sigma\|+O(T^2)\\ &\leq&T^2W+O(T^2)=O(T^2). \end{eqnarray} Therefore, $f[k]$ can be approximated by $2f[k-1]-f[k-2]$. Due to the expressions in (\ref{d12}) and (\ref{gk}), $g[k]$ can be approximated by $2g[k-1]-g[k-2]$. This approximation was also employed in the control law presented in \citep{Nguyen2016}. The expression in (\ref{d12}) also implies that \begin{equation}\label{d12_TO2} d_{12}[k]-2d_{12}[k-1]+d_{12}[k-2]=O(T^3). \end{equation} In the equivalent control law (\ref{ueq2}), replacing $g[k]$ by $2g[k-1]-g[k-2]$ yields \begin{eqnarray}\label{mu2} \nonumber u[k]&=&-\frac{1}{T}(HC\bar{B})^{-1}(((1-\alpha)I_m+T\Omega_2)s[k]\\ &&+2g[k-1]-g[k-2]).
\end{eqnarray} Using (\ref{d-1b}), the control law in (\ref{mu2}) becomes \begin{eqnarray}\label{mu2b} \nonumber u[k]&=&-\frac{1}{T}(HC\bar{B})^{-1}(((3-\alpha)I_m+T\Omega_2)s[k]\\ \nonumber&& -(3I_m+2T\Omega_2)s[k-1]+(I_m+T\Omega_2)s[k-2])\\ &&+2u[k-1]-u[k-2]. \end{eqnarray} Using the same argument as in Subsection \ref{c1}, from (\ref{mu2}) we obtain \begin{eqnarray}\label{mu2c} \nonumber u[k]&=&-(HC\bar{B})^{-1}((\beta I_m+\Omega_2)s[k]+\frac{2g[k-1]-g[k-2]}{T})\\ &=&O(1). \end{eqnarray} As in Subsection \ref{c1}, we introduce the variables \begin{eqnarray}\label{ss-3} s_1[k]&=&s[k-1],\\ \label{u-1b} \gamma[k]&=&T HC\bar{B}u[k],\\ \label{u-2} \gamma_1[k]&=&T HC\bar{B}u[k-1]. \end{eqnarray} Let \begin{equation} \psi_2[k]=\left[\begin{matrix}\xi[k]\\ s[k]\\ s_1[k]\\ \gamma[k]\\ \gamma_1[k]\end{matrix}\right]; \end{equation} then the dynamics of the extended system is \begin{equation}\label{aug2} \psi_2[k+1]=A_{aug2} \psi_2[k]+d_3[k], \end{equation} where \begin{equation}\label{A-aug} A_{aug2}=\left[\begin{matrix}A_s&TN_3\\TN_4&A_{e2}\end{matrix}\right], \end{equation} the sub-matrix $A_s$ is given in (\ref{As}), and \begin{equation}\label{Ae2} A_{e2}= \left[\begin{smallmatrix}(I_m+T\Omega_2)&0&I_m&0\\ I_m&0&0&0\\ -(-\alpha I_m+(2-\alpha)T\Omega_2+T^2\Omega_2^2)&-(I_m+T\Omega_2)&-((1-\alpha)I_m+T\Omega_2)&-I_m\\0&0&I_m&0\end{smallmatrix}\right]. \end{equation} The (augmented) disturbance term in system (\ref{aug2}) is \begin{equation}\label{d-3} d_3[k]=\left[\begin{matrix}d^T_{11}[k]& d^T_{12}[k]&0&-d^T_{12}[k]((3-\alpha)I_m+T\Omega_2)^T&0\end{matrix}\right]^T, \end{equation} and the off-diagonal matrices in (\ref{A-aug}) are \begin{eqnarray} \nonumber N_3&=&[M\bar{A}R\quad 0_{(n-m)\times m}\quad M\bar{\bar{B}}(HC\bar{B})^{-1}\quad 0_{(n-m)\times m}],\\ \nonumber N_4&=&\left[\begin{matrix}\Omega_1\\0\\-((3-\alpha)I_m+T\Omega_2)\Omega_1\\0\end{matrix}\right]. \end{eqnarray} As argued in Subsection \ref{c1}, the following results are obtained. \begin{lemma}\label{eigAe2} The eigenvalues of $A_{e2}$ are $\alpha$ and $0$. \end{lemma} \noindent{\bf{\em Proof:}\ \ } Let \begin{equation} X=\lambda I_{4m}-A_{e2}=\left[\begin{smallmatrix}(\lambda-1)I_m-T\Omega_2&0&-I_m&0\\-I_m&\lambda I_m&0&0\\ (-\alpha I_m+(2-\alpha)T\Omega_2+T^2\Omega_2^2)&(I_m+T\Omega_2)&(\lambda+1-\alpha)I_m+T\Omega_2&I_m\\0&0&-I_m&\lambda I_m\end{smallmatrix}\right], \end{equation} so that the eigenvalues of $A_{e2}$ are the roots of $\det X=0$. Let \begin{eqnarray} \nonumber X_{11}&=&(\lambda-1)I_m-T\Omega_2,\\ \nonumber X_{31}&=&-\alpha I_m+(2-\alpha)T\Omega_2+T^2\Omega_2^2,\\ \nonumber X_{32}&=&I_m+T\Omega_2,\\ \nonumber X_{33}&=&(\lambda+1-\alpha)I_m+T\Omega_2. \end{eqnarray} Furthermore, let \begin{equation} J_1= \left[\begin{matrix}I_m&0&0&0\\ I_m&X_{11}&0&0\\ 0&0&I_m&0\\0&0&0&I_m\end{matrix}\right], \end{equation} \begin{equation} J_2= \left[\begin{matrix}I_m&0&0&0\\0&I_m&0&0\\ -X_{31}&0&X_{11}&0\\0&0&0&I_m\end{matrix}\right], \end{equation} \begin{equation} J_3= \left[\begin{matrix}I_m&0&0&0\\0&I_m&0&0\\ 0&-X_{11}X_{32}&\lambda X_{11}&0\\0&0&0&I_m\end{matrix}\right], \end{equation} \begin{equation} J_4= \left[\begin{matrix}I_m&0&0&0\\0&I_m&0&0\\ 0&0&\lambda X_{11}&0\\0&0&I_m&Y_{33}\end{matrix}\right], \end{equation} where \begin{equation} Y_{33}=X_{11}X_{32}+\lambda X_{11}(X_{11} X_{33}+X_{31}). \end{equation} Then we have \begin{equation} J_4J_3J_2J_1X=\left[\begin{smallmatrix}X_{11}&0&-I_m&0\\0&\lambda X_{11}&-I_m&0\\ 0&0&Y_{33}&\lambda X_{11}^2\\0&0&0&\lambda Y_{33}+\lambda X_{11}^2\end{smallmatrix}\right].
\end{equation} Thus, \begin{equation} \det(J_4J_3J_2J_1X)=\det(X_{11})\det(\lambda X_{11})\det(Y_{33})\det(\lambda Y_{33}+\lambda X_{11}^2). \end{equation} Since \begin{equation} \det(\lambda Y_{33}+\lambda X_{11}^2)=\det(\lambda^3(\lambda-\alpha)X_{11}), \end{equation} it follows that \begin{equation} \det(X)=\lambda^{3m}(\lambda-\alpha)^m. \end{equation} This proves the lemma. \hspace*{\fill}~\QED\par\endtrivlist\unskip \begin{theorem}\label{th1b} Suppose Assumption \ref{as2} holds. In the absence of disturbances, under the discrete-time output feedback control law (\ref{mu2b}), the sampled-data system (\ref{b-4}) is asymptotically stable if the sampling period $T$ is small enough. \end{theorem} \noindent{\bf{\em Proof:}\ \ } Using the results in perturbation theory for linear operators in \citep{kato1995}, the eigenvalues of $A_{aug2}$ are \begin{eqnarray} \lambda_1&=&\lambda \{A_s+O(T^2)\},\\ \lambda_2&=&\lambda \{A_{e2}+O(T^2)\}. \end{eqnarray} According to Lemma \ref{eigAe2}, the eigenvalues of $A_{e2}$ are $\alpha$ and 0, which lie within the unit circle. In addition, the eigenvalues of $A_s$ are also within the unit circle. Hence, similar to the proof of Theorem \ref{th1}, for sufficiently small $T$, the eigenvalues of $A_{aug2}$ lie within the unit circle, which implies the stability of the closed-loop system. \hspace*{\fill}~\QED\par\endtrivlist\unskip \begin{theorem}\label{th2b} Suppose Assumptions \ref{as2} and \ref{as3} hold. In the presence of the external disturbance, the sampled-data output feedback control law (\ref{mu2b}) produces a quasi-sliding motion about the sliding surface $s=0$ with an $O(T^2)$ boundary layer and an ultimate bound of $O(T)$ on the original state variables. Furthermore, the control input is guaranteed to be $O(1)$ if the initial state variables are such that $s[0]=O(1)$. \end{theorem} \noindent{\bf{\em Proof:}\ \ } Under the control law (\ref{mu2}), \begin{eqnarray}\label{sk2b} \nonumber s[k+1]&=&\alpha s[k]+g[k]-2g[k-1]+g[k-2]\\ \nonumber &=&\alpha s[k]+T\Omega_1 (\xi[k]-2\xi[k-1]+\xi[k-2])\\ &&+d_{12}[k]-2d_{12}[k-1]+d_{12}[k-2]. \end{eqnarray} Using (\ref{d12_TO2}), and noting that at steady state $\xi[k+1]\approx \xi[k]$ and $s[k+1]\approx s[k]$, we obtain \begin{equation} s[k+1]=\alpha s[k]+O(T^3). \end{equation} This implies \begin{equation} s[k] = \frac{1}{1-\alpha}O(T^3)=O(T^2), \end{equation} since $1-\alpha=\beta T=O(T)$. Similarly, at steady state, $\xi[k+1]\approx \xi[k]$ and from (\ref{b-4}), \begin{equation} \xi[k]= (TM\bar{A}Q)^{-1} O(T^2)=O(T). \end{equation} Therefore, \begin{equation} x[k]=P_1^{-1}\left[ \begin{matrix}\xi[k]\\ s[k]\end{matrix}\right]=O(T). \end{equation} \hspace*{\fill}~\QED\par\endtrivlist\unskip \begin{remark} The quasi-sliding motions under the proposed control laws (\ref{mu1b}) and (\ref{mu2b}) are less accurate than the ones in \citep{nguyen09b,Nguyen2016}, which achieve $O(T^2)$ and $O(T^3)$ boundary layers, respectively (however, the bounds on the state variables are similar). The advantage of the proposed schemes is that the resulting control efforts operate in a less demanding mode than their original counterparts in \citep{nguyen09b,Nguyen2016}. \end{remark} \begin{remark} When $s[k]=O(1)$, the control laws in (\ref{mu1b}) and (\ref{mu2b}) are $O(1)$, as explained in (\ref{mu1c}) and (\ref{mu2c}) respectively. In contrast, the methods in \citep{nguyen09b,Nguyen2016} produce $O(1/T)$ control effort, which results in undesired high gain control. This shows the advantage of the method proposed in this paper.
\end{remark} \section{SIMULATION} In this section, for comparison we employ the same system as in \citep{Nguyen2016}, which is the lateral dynamics of an aircraft \cite{Srina78}. Its system matrices are given as \begin{eqnarray} \nonumber A&=&\left[\begin{matrix}-3.79& 0.04 &-52 &0\\-0.14& -0.36 &4.24 &0\\0.06& -1 &-0.27 &0.05\\1 &0.06 &0 &0 \end{matrix}\right], \\ \nonumber B&=&\left[\begin{matrix}25& 9.83\\1.42& -4.2\\0.01 &0.05\\0& 0\end{matrix}\right],\\ \nonumber C&=&\left[\begin{matrix}1&0&0&0\\0&1&0&0\\0&0&0&1\end{matrix}\right]. \end{eqnarray} The invariant zero of the system is $-0.1796$. The matrix $H$ is \begin{equation} \nonumber H=\left[\begin{matrix}0.035306& 0.082634 &0.076550\\0.011937& -0.210157 & 0.008324\end{matrix}\right], \end{equation} which is constructed such that the assignable eigenvalue for the sliding mode is $-2$ \citep{Nguyen2016}. The initial condition is given by $x(0)=[-1,2,1,-2]^T$. The sampling period is $T=0.01$s. The disturbance vector is defined as \begin{equation} f(t)=\begin{cases} \left[\begin{matrix}0\\0\end{matrix}\right], & \text{for } 0\leq t< 10\\ \left[\begin{matrix}2\\-0.5\end{matrix}\right], & \text{for } 10\leq t<5\pi\\ \left[\begin{matrix}1+\sin(0.5t)\\0.5\cos(t)\end{matrix}\right],& \text{for } t\geq5\pi, \end{cases} \end{equation} so that the disturbance affects the system dynamics from $t=10$s onwards. At $t=5\pi$s, the second derivative of the disturbance does not exist. The parameters of the controllers are $\beta=3$ and $\alpha=0.97$. Since the control methods use information from previous time instants, the control signals from M1 and modified M1 (MM1) are activated only from time step 2 onwards, while those of M2 and modified M2 (MM2) start working from time step 3.
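Before presenting the results, we note that this setup is simple to reproduce. The following Python sketch (our own illustration, not the authors' code) simulates the closed loop under the modified control law (\ref{mu1b}) (MM1); the disturbance is held constant over each sampling interval, which approximates (\ref{dk}) to the orders considered here, and the MM2 law (\ref{mu2b}) can be implemented analogously using two past values of $s$ and $u$.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm, null_space

A = np.array([[-3.79, 0.04, -52.0, 0.0], [-0.14, -0.36, 4.24, 0.0],
              [0.06, -1.0, -0.27, 0.05], [1.0, 0.06, 0.0, 0.0]])
B = np.array([[25.0, 9.83], [1.42, -4.2], [0.01, 0.05], [0.0, 0.0]])
C = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 0, 1.0]])
H = np.array([[0.035306, 0.082634, 0.076550],
              [0.011937, -0.210157, 0.008324]])

def f(t):                                  # disturbance vector defined above
    if t < 10.0:
        return np.zeros(2)
    if t < 5*np.pi:
        return np.array([2.0, -0.5])
    return np.array([1.0 + np.sin(0.5*t), 0.5*np.cos(t)])

T, alpha, n, m = 0.01, 0.97, 4, 2
Mmat = null_space(B.T).T                   # rows span Null(B^T): Mmat @ B = 0
P1 = np.vstack([Mmat, H @ C])
R = np.linalg.inv(P1)[:, n-m:]             # P1^{-1} = [Q  R]

E = expm(np.block([[A, B], [np.zeros((m, n + m))]]) * T)
Phi, Gam = E[:n, :n], E[:n, n:]            # ZOH discretisation
Abar, Bbar = (Phi - np.eye(n)) / T, Gam / T
Om2 = H @ C @ Abar @ R
Kinv = np.linalg.inv(H @ C @ Bbar)
I2 = np.eye(m)

x = np.array([-1.0, 2.0, 1.0, -2.0])
s_prev = H @ C @ x
u = u_prev = np.zeros(m)
for k in range(3000):                      # simulate 30 s
    s = H @ C @ x
    if k >= 2:                             # MM1 active from time step 2
        u = -(1/T) * Kinv @ (((2 - alpha)*I2 + T*Om2) @ s
                             - (I2 + T*Om2) @ s_prev) + u_prev
    # disturbance held constant over [kT,(k+1)T): d[k] ~ Gam f[k] + O(T^2)
    x = Phi @ x + Gam @ (u + f(k*T))
    s_prev, u_prev = s, u
\end{verbatim}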
We conducted two experiments: a noise-free case and a noisy case. (In the conference version of the paper \cite{nguyen2017improvement}, only the noise-free case was considered.) Figs. \ref{figu1} and \ref{figu2} reveal that the modified versions produce control signals of smaller magnitude than those of the original control methods. Specifically, the largest magnitudes during the transient of the control laws using M1 and M2 are about 23 and 25 respectively; meanwhile, the control efforts using methods MM1 and MM2 have magnitudes of about 2. This suggests that the proposed schemes are able to maintain the control signals at low gain levels. At $t=10$s and $t=5\pi$s, the control signal using method MM1 exhibits less fluctuation than that using method MM2. This suggests that method MM1 is less sensitive to disturbances than method MM2. In Figs. \ref{figx1} and \ref{figx2}, the evolution of the state variables using M1 and M2 is slightly better than that using MM1 and MM2. The sliding functions are presented in Figs. \ref{figs1} and \ref{figs2}, showing that M1 and M2 perform better than their counterparts. These numerical results illustrate our theoretical analysis. A noise profile is added to the outputs of the system in the form of a uniformly distributed random signal whose range lies in the interval $[-0.005,0.005]$. It is shown in Figs. \ref{figu1n} and \ref{figu2n} that MM1 and MM2 generate less control effort than M1 and M2. The largest magnitudes of the control signals using M1 and M2 are about 24 and 23 respectively. In contrast, the magnitudes of the control efforts using MM1 and MM2 are about 3 and 4 respectively. Figs. \ref{figx1n}, \ref{figx2n}, \ref{figs1n}, \ref{figs2n} show that the evolution of the state variables and sliding functions using MM1 and MM2 is much better than that of their counterparts. MM1 performs best in this scenario as its control signals are less sensitive to noise than the others. It should also be observed that the performance of MM2 and M1 is comparable. In both cases, the numerical simulations reveal that the proposed schemes are effective in avoiding high gain control efforts. In the absence of noise, M1 and M2 perform better than MM1 and MM2; in contrast, in the presence of noise, MM1 and MM2 outperform their counterparts. \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{u1} \caption{The evolution of the control signals using M1 and MM1 in the noise-free case} \label{figu1} \end{figure} \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{u2} \caption{The evolution of the control signals using M2 and MM2 in the noise-free case} \label{figu2} \end{figure} \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{x1} \caption{The evolution of the state variables using M1 and MM1 for the noise-free case} \label{figx1} \end{figure} \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{x2} \caption{The evolution of the state variables using M2 and MM2 for the noise-free case} \label{figx2} \end{figure} \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{s1} \caption{The evolution of the sliding functions using M1 and MM1 for the noise-free case} \label{figs1} \end{figure} \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{s2} \caption{The evolution of the sliding functions using M2 and MM2 for the noise-free case} \label{figs2} \end{figure} \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{u1n} \caption{The evolution of the control signals using M1 and MM1 for the noisy case} \label{figu1n} \end{figure} \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{u2n} \caption{The evolution of the control signals using M2 and MM2 for the noisy case} \label{figu2n} \end{figure} \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{x1n} \caption{The evolution of the state variables using M1 and MM1 for the noisy case} \label{figx1n} \end{figure} \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{x2n} \caption{The evolution of the state variables using M2 and MM2 for the noisy case} \label{figx2n} \end{figure} \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{s1n} \caption{The evolution of the sliding functions using M1 and MM1 for the noisy case} \label{figs1n} \end{figure} \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{s2n} \caption{The evolution of the sliding functions using M2 and MM2 for the noisy case} \label{figs2n} \end{figure} \section{CONCLUSIONS} The high gain nature of previous output feedback sliding mode control schemes was addressed, wherein a control signal of magnitude $O(1/T)$ could occur. New modifications were proposed to alleviate possible high gain control efforts, so that the resulting control efforts are of the order of $O(1)$. The theoretical analysis shows that the accuracy of the sliding mode using the modified versions of M1 and M2 is $O(T)$ and $O(T^2)$ respectively, while their original forms offer $O(T^2)$ and $O(T^3)$ boundary layers for the sliding motion. Simulation results have shown the effectiveness of the proposed schemes. The proposed scheme applies to linear sampled-data systems with relative degree one.
Future work will investigate control methods for systems with higher relative degree. Output feedback sliding mode control for nonlinear sampled-data systems is also a possible future direction. Practical experiments will be conducted to verify the proposed approach. \bibliographystyle{apalike-refs}
{ "timestamp": "2019-04-16T02:07:42", "yymm": "1904", "arxiv_id": "1904.06489", "language": "en", "url": "https://arxiv.org/abs/1904.06489" }
\section{Introduction} For more than two decades there has been interest in surface nanobubbles, which can form when a hydrophobic surface is fully immersed in liquid {\cite{parker1994bubbles, craig2011very, lohse2015surface, alheshibri2016history}}. Due to the high Laplace pressure inside a hemispherical cap shaped nanobubble, we might expect the gas inside to dissolve and diffuse away in microseconds \cite{ljunggren1997lifetime}. However, in reality they can sometimes remain stable for many hours or even up to days \cite{craig2011very, lohse2015surface, stevens2005effects, simonsen2004nanobubbles}. The existence of surface nanobubbles at the solid-liquid interface plays a significant role in a number of chemical and physical processes, such as flotation in mineral processing \cite{hampton2009accumulation}, the design of microdevices \cite{paxton2004catalytic} and drug delivery to cancer cells \cite{janib2010imaging}. As well as this wide range of applications, there are also theoretical challenges in understanding the fundamental physical properties of nanobubbles, which have attracted the attention of many scientists. These surface nanobubbles contain air molecules that have come out of solution in the liquid, and are not purely filled with the vapour phase. To properly describe such a system, one must treat the full two component system of solvent liquid and solute air molecules. However, as a precursor to tackling the full binary mixture problem, the situation that must first be understood is that of the pure liquid and the properties of nanobubbles of the vapour that may appear between the liquid and a solid surface. It is this aspect that we discuss in the present paper. Our approach is to use a microscopic (i.e.\ particle resolved) classical density functional theory (DFT) \cite{evans1979nature, hansen2013theory} based method to calculate a coarse grained effective interfacial free energy (often called the binding potential, which is defined below) for vapour nanobubbles. { There are, of course, other computer simulation methods by which this can be done \cite{macdowell2011computer, tretyakov2013parameter, benet2016premelting, jain2019using}. The resulting binding potential} is then input into a mesoscopic interfacial free energy functional for determining the height profile of the nanobubbles. This also allows us to calculate the total free energy of such a nanobubble and how it depends on the interaction potential between the surface and the fluid particles, thereby allowing us to estimate the relative probabilities for observing nanobubbles as a function of size and surface properties. \begin{figure}[b] \includegraphics[width=1.\columnwidth]{Fig1.pdf} \caption{\label{2dbubble} Sketch of a vapour bubble with height profile $z=h(x,y)$, surrounded by liquid, on top of a solid planar wall that exerts an external potential $V_{ext}(z)$ on the fluid. The coordinate direction $z$ is perpendicular to the solid surface and the $x$- and $y$-axes are parallel to the surface. {The contact angle of the liquid with the wall is $\theta$.}} \end{figure} The system we model here is a very small bubble of vapour located on a planar solid surface that is in contact with a bulk liquid. The height of the liquid-vapour interface is defined to be at $h(x,y)$ above the surface, where $(x,y)$ is the position on the surface. A sketch of the system is displayed in Fig.\ \ref{2dbubble}, illustrating a cross section through a (nanometre scaled) vapour bubble.
To develop an understanding of such a bubble, $h(x,y)$ is a key quantity to be determined, as is the contact angle the liquid-vapour interface makes with the substrate. This, via Young's equation \cite{de2013capillarity}, is related to thermodynamic quantities, namely the three interfacial tensions: $\gamma_{lv}$, $\gamma_{sl}$ and $\gamma_{sv}$, which are the liquid-vapour, solid-liquid and solid-vapour interfacial tensions, respectively. Of course, for larger bubbles $h(x,y)$ has the shape of a hemispherical cap, because this minimises the area of the liquid-vapour interface and so also the free energy of the system. However, near the contact line { (i.e.\ where the three phases meet)} there is an additional contribution to the free energy from the binding (or interfacial) potential $g(h)$, which results from molecular interactions. This influences the shape of $h(x,y)$ near the contact line; for nanobubbles it is particularly important and can influence the overall shape of $h(x,y)$. The corresponding contribution to the pressure within the bubble can be expressed in terms of the Derjaguin (or disjoining) pressure $\Pi(h) = - \partial g(h)/\partial h$ \cite{de2013capillarity} and its effects can be observed experimentally \cite{zhang2008thermodynamic}. The physics of vapour bubbles on surfaces shares many similarities with the more commonly studied system of liquid droplets on a surface, surrounded by the vapour. In both cases, the two main contributions to the excess free energy $F[h]$ of the system due to the interface are the binding potential contribution (i.e.\ due to the molecular interactions), and the surface tension contribution (proportional to the area of the liquid-vapour interface), which gives \cite{dietrich1988inphase, schick1990liquids, de2013capillarity} \begin{equation}\label{IH} F_{\textrm{IH}}\left[h\right] = \iint\left[g\left(h\right)+\gamma_{lv}\sqrt{1+\left(\nabla h\right)^2}\right]\mathrm dx\mathrm dy. \end{equation} This free energy is often termed an interfacial Hamiltonian (IH). Note that in Eq.~(\ref{IH}) we have omitted terms independent of $h$ -- see Eq.~(\ref{realbp}) below. To study nanobubbles, in Ref.~\cite{svetovoy2016effect} a simple approximate form for the binding potential $g(h)$ was postulated, since although much can be inferred about the qualitative form of $g(h)$ from various considerations \cite{dietrich1988inphase, schick1990liquids, de2013capillarity}, its precise form is not known exactly. The model of Ref.~\cite{svetovoy2016effect} includes contributions to $g(h)$ due to the van der Waals forces. Our approach here is to develop a model for vapour nanobubbles at equilibrium, based on calculating the binding potential $g(h)$ using DFT for all values of $h$, which can then be used as an input to the IH model. Since DFT incorporates the effects of the compressibility of the vapour, these effects are also incorporated into $g(h)$ when it is calculated using our approach. DFT is a hugely powerful and widely used microscopic statistical mechanical theory for calculating the density profile $\rho(\mathbf{r})$ of inhomogeneous systems of interacting particles, where $\mathbf{r}=(x,y,z)$. An advantage of DFT is that it gives a molecular-level description (as do, e.g.\ molecular dynamics computer simulations), but the computer time taken to solve DFT is small, particularly when the fluid average density profile only varies in one direction (e.g.\ perpendicular to the wall).
DFT is especially suitable for determining excess thermodynamic quantities, arising from inhomogeneities in the fluid density distribution due to the presence of interfaces. There are numerous works applying DFT to study the wetting and drying interfacial phase behaviour of liquids -- see for example Refs.~\cite{meister1985density, tarazona1987phase, dietrich1988inphase, van1991wetting, henderson1992weighted, evans92, wu2006density, hughes2014introduction, evans2017drying, andreasPoFstructure, LuisPaper}. { Since DFT is an accurate theory for the spatial variations in the particle density, it thereby incorporates the effects of vapour compressibility, which are believed to be important for nanobubbles.} To determine the binding potential using DFT, one must calculate a series of constrained density profiles, the constraint being that the adsorption { $\Gamma$ (rather than the vapour thickness $h$) takes a series of specified values. Recall that $\Gamma=N_{ex}/A$, where $N_{ex}$ is the excess number of particles in the system due to the presence of the interface, which has area $A$ \cite{evans1990fluids}. Constraining $\Gamma$} can be done using the method proposed in Ref.~\cite{archer2011nucleation} and further developed by Hughes {\it et al.}\ \cite{hughes2015liquid, hughes2017influence}. These works showed that the required constraint takes the form of a {\em fictitious} external potential that can be calculated self-consistently as part of the algorithm for determining the constrained density profile. Hughes {\it et al.}\ \cite{hughes2015liquid, hughes2017influence} applied the method to determine the binding potential for films of liquid adsorbed on a surface in contact with a bulk vapour. Taking the resulting binding potentials together with the IH \eqref{IH} results in droplet profiles that are in excellent agreement with those obtained from solving the full DFT to determine the droplet profile \cite{hughes2015liquid, hughes2017influence}, validating the overall coarse graining approach. Further validation comes from Ref.~\cite{buller2017nudged} where two other completely different approaches for obtaining $g(\Gamma)$ were used that nonetheless produce identical results. These two approaches are: (i) applying the nudged-elastic-band algorithm to connect the sequence of density profiles required to calculate $g$ and (ii) a method based on an overdamped nonconserved dynamics to explore the underlying free-energy landscape. For liquid droplets, the resulting binding potential can also be input into a thin film hydrodynamic equation to study the dynamics of liquid droplets on surfaces \cite{yin2017films}. Having calculated the binding potential $g$ as a function of the adsorption $\Gamma$, it is straightforward to relate this to the height $h$ of the vapour-liquid interface above the surface of the substrate. Note, however, that the adsorption $\Gamma$ is a more appropriate measure of the amount of a particular phase on a substrate than the height $h$ when the amounts are small and on microscopic length scales, e.g.\ when there is sub-monolayer adsorption at an interface \cite{hughes2015liquid, hughes2017influence,yin2017films}. The adsorption is defined as \begin{equation} \Gamma (x,y)= \int_0^\infty(\rho(\mathbf r) - \rho_b) \mathrm {d}z, \label{eq:ads} \end{equation} where $\rho_b$ is the bulk fluid density and we have assumed the $z$-axis is perpendicular to the substrate, which has its planar surface at $z=0$. 
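As a practical aside, on a uniform grid of points in $z$ (such as the grids used for the calculations below) the integral in Eq.~\eqref{eq:ads} reduces to a simple quadrature. A minimal sketch (our own illustration, assuming NumPy is available) is:

\begin{verbatim}
import numpy as np

def adsorption(rho, rho_b, dz):
    # Eq. (eq:ads): integrated deviation of rho(z) from the bulk density
    return np.trapz(rho - rho_b, dx=dz)
\end{verbatim}

Note that for a film of vapour in contact with the bulk liquid ($\rho_b=\rho_l$) this quantity is negative.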
The corresponding height $h(x,y)$, quantifying the amount of the phase that is on the substrate, may be defined in a number of ways. This lack of a unique definition is another reason why $\Gamma(x,y)$ is a better measure. For example, one could define $h(x,y)$ to be the position where the average density $\rho(x,y,z=h)=(\rho_b+\rho_a)/2$, which is the average of the bulk density and the density of the phase adsorbed on the substrate, $\rho_a$. However, here we prefer to define $h$ as \cite{hughes2015liquid, hughes2017influence, yin2017films} \begin{equation} h(x,y) \equiv \frac{\Gamma(x,y)}{\rho_a-\rho_b}. \label{eq:bubble_height} \end{equation} In the situation where the bulk phase is the vapour (with density $\rho_b=\rho_v$) and the phase adsorbed on the surface is the liquid (with density $\rho_a=\rho_l$), this is a widely used definition. Note also that in the case when the liquid is the bulk phase ($\rho_b=\rho_l$) and it is the vapour that is adsorbed at the interface ($\rho_a=\rho_v$), then in general both the numerator and the denominator on the right hand side of Eq.~\eqref{eq:bubble_height} are negative, but of course still giving a positive thickness $h$. This paper is structured as follows: Some background on the relevant interfacial thermodynamics and the definition of $g(h)$ is given in Sec.~\ref{sec:int_thermo}. In Sec.~\ref{sec:DFT_approach} we describe briefly the DFT based method we apply for calculating $g(h)$ for vapour films adsorbed between a planar wall and a bulk liquid. Then, in Sec.~\ref{sec:model_fluid}, we introduce the model fluid that we consider, the approximate DFT used to treat this fluid and the various different wall potentials that we consider. In Sec.~\ref{sec:results} we present results for $g(\Gamma)$ for various different wall potentials, showing how the decay form of the wall potential away from the wall influences the decay form of $g(\Gamma)$. Following this, in Sec.~\ref{sec:bubble_profiles} we input the obtained binding potentials into the interfacial Hamiltonian \eqref{IH}, in order to determine vapour nanobubble height profiles and their free energies. Finally, in Sec.~\ref{sec:conc} we draw our conclusions. \section{Interfacial thermodynamics for vapour adsorption}\label{sec:int_thermo} \begin{figure}[t] \includegraphics[width=1.\columnwidth]{Fig2.pdf} \caption{\label{1dinterface}A schematic diagram of a uniform thickness layer of vapour adsorbed at the interface between a planar solid substrate and the bulk liquid. The thickness of the vapour film is $h$.} \end{figure} Consider the system illustrated in Fig.~\ref{1dinterface}. Treating it in the grand canonical ensemble, the grand potential $\Omega$ is the relevant free energy to consider, which is minimised when the system is at equilibrium. To describe the interfacial phase behaviour, we follow the usual procedure \cite{rowlinson1982molecular} and consider surface excess quantities; in this case it is the excess grand potential per unit area \begin{equation} \frac{\Omega_{ex}}{A} = \frac{\Omega - \Omega_b}{A}, \end{equation} where $\Omega_b=-pV$ is the grand potential for a bulk system having the same volume $V$ and pressure $p$, but with no interface, and where $A$ is the area of the wall.
This can be split into the following contributions \begin{equation}\label{excessbubble} \frac{\Omega_{ex}(h)}{A} = \gamma_{lv}+\gamma_{sv} + h\delta p+g(h), \end{equation} where $\delta p=p-p_v$ is the difference between the pressure of the bulk liquid and that of the corresponding vapour at the same chemical potential $\mu$. If the system is at bulk vapour-liquid coexistence, then this term is zero. The interfacial tensions $\gamma_{lv}$ and $\gamma_{sv}$ can be calculated using DFT in the usual way \cite{evans1979nature, hansen2013theory, wu2006density, hughes2014introduction}. The above equation may be viewed as defining the binding potential: it is the `remainder' after the other terms have been subtracted, i.e.\ at bulk vapour-liquid coexistence, with $\delta p = 0$, we have \cite{macdowell2011computer} \begin{equation}\label{realbp} g(\Gamma) = \frac{\Omega+pV}{A} - \gamma_{lv}-\gamma_{sv}. \end{equation} In the limit of a thick vapour film, i.e.\ as $\Gamma\to-\infty$, the two interfaces are far from one another, so they do not influence each other, and therefore we have $g(\Gamma)\rightarrow 0$. However, when $\Gamma=\Gamma_0$, the value at the minimum of the binding potential, we have \cite{de2013capillarity} \begin{equation} g(\Gamma_0) = \gamma_{sl}-\gamma_{sv}-\gamma_{lv}. \end{equation} Using Young's equation \cite{young1805essay} $\gamma_{lv}\cos\theta = \gamma_{sv}-\gamma_{sl}$, we obtain \cite{de2013capillarity, rauscher2008wetting, DeCh1974jcis} \begin{equation}\label{youngs} \cos \theta = \frac{\gamma_{sv}-\gamma_{sl}}{\gamma_{lv}}=-1-\frac{g(\Gamma_0)}{\gamma_{lv}}, \end{equation} where $\theta$ is the equilibrium contact angle, measured, as in the usual definition, as the angle through {the} liquid phase. Therefore, this is the outer angle on bubbles, and so we have the opposite sign in this equation compared to when considering liquid drops. { Note that if the system is away from coexistence, with $\delta p\neq0$, then the equilibrium state is not at $\Gamma=\Gamma_0$, the value at the minimum of $g(\Gamma)$. Instead, by minimising the excess grand potential in Eq.~\eqref{excessbubble} with respect to variations in $h$, we see that the equilibrium is given by $\frac{\partial}{\partial h}(h\delta p+ g(h))=0$, i.e.\ the equilibrium film thickness is the solution of $g'(h)+\delta p=0$. When $\delta p$ is small it can also be useful to use the Gibbs-Duhem relation $(\partial p/\partial \mu)_T=\rho$ (see e.g.\ \cite{evans1987phase}) to show that $\delta p=\Delta\rho\delta\mu$ when $\delta\mu$ is small, where $\Delta\rho=(\rho_l-\rho_v)$ and $\delta\mu=(\mu-\mu_{coex})$, enabling one to determine the equilibrium film thickness (i.e.\ adsorption) as a function of $\delta\mu$.} \section{DFT approach to calculate $g(\Gamma)$}\label{sec:DFT_approach} In DFT \cite{evans1979nature, hansen2013theory} we find that the grand potential is the following functional of the fluid density profile $\rho(\mathbf{r})$: \begin{equation}\label{grand} \Omega[\rho(\mathbf{r})] = F[\rho(\mathbf{r})] + \int \rho(\mathbf{r})(V_{ext}(\mathbf{r})-\mu) \mathrm{d}\mathbf{r}, \end{equation} where $V_{ext}(\mathbf{r})$ is the external potential felt by a single particle at position $\mathbf{r}$ (i.e.\ the potential due to the solid substrate in the treatment here), $\mu$ is the chemical potential and \begin{equation}\label{Fe} F[\rho(\mathbf{r})]=k_BT\int\rho(\mathbf{r})(\ln[\Lambda^3\rho(\mathbf{r})]-1)\mathrm{d}\mathbf{r} +F_{ex}[\rho(\mathbf{r})] \end{equation} is the intrinsic Helmholtz free energy.
The first term is the ideal-gas contribution and $F_{ex}$ is the excess part due to the interactions between the fluid particles. In the ideal-gas part, $k_B$ is Boltzmann's constant, $T$ is the temperature and $\Lambda$ is the thermal de Broglie wavelength. The equilibrium fluid density profile is that which minimises $\Omega[\rho(\mathbf{r})]$, i.e.\ it satisfies the Euler-Lagrange equation \begin{equation}\label{euler} \frac{\delta \Omega}{\delta \rho(\mathbf r)}=k_BT\ln[\Lambda^3\rho(\mathbf{r})]+\frac{\delta F_{ex}}{\delta\rho}+V_{ext}(\mathbf{r})-\mu=0. \end{equation} This equation may be rearranged to obtain \begin{equation} \rho(\mathbf{r})=\Lambda^{-3}e^{\beta[\mu-\frac{\delta F_{ex}}{\delta\rho}-V_{ext}(\mathbf{r})]}, \label{density} \end{equation} where $\beta=(k_BT)^{-1}$. This is the form usually used for solving DFT numerically via a Picard iterative process \cite{hughes2014introduction, roth2010}. This consists of constructing a sequence of approximate solutions, indexed by the integer $k$, such that the $(k+1)$th approximation is obtained from the $k$th approximation, with each successive approximation closer to the true density profile. We start by guessing an initial density profile (for example the ideal-gas result), and calculate a new profile $\rho_{rhs}$ via the right hand side of Eq.~(\ref{density}). Then, a fraction of this new profile is mixed with the previous approximation for the profile $\rho_{k}$, to compute the new approximation \begin{equation} \rho_{k+1}=\alpha\rho_{rhs}+(1-\alpha)\rho_{k}. \label{eq:mixing} \end{equation} This equation is then iterated until convergence to the desired tolerance is achieved. Here, $\alpha$ is the mixing parameter, which typically must be in the range $0.01<\alpha<0.1$ for the algorithm to be numerically stable. Solving the Euler-Lagrange equation (\ref{euler}) as described above gives the equilibrium fluid density profile, which has adsorption $\Gamma_0$, as determined by Eq.~\eqref{eq:ads}. On substituting this density profile into Eq.~(\ref{grand}), together with Eq.~\eqref{realbp}, we obtain the minimum value of the binding potential. Finding the full binding potential curve $g(\Gamma)$ requires calculations at a series of points over a range of different values of the adsorption $\Gamma$. As mentioned above, we do this by applying the fictitious potential approach developed and applied in Refs.\ \cite{archer2011nucleation, hughes2015liquid, hughes2017influence, buller2017nudged}. This method constrains the adsorption of the system to be a desired value by modifying the Picard iteration, replacing $\rho_{rhs}$ in Eq.~\eqref{eq:mixing} with \begin{equation} \rho_{new}=(\rho_{rhs}-\rho_b)\frac{\Gamma_d}{\Gamma_{rhs}}+\rho_b, \end{equation} where $\Gamma_{rhs}$ is the adsorption corresponding to the profile $\rho_{rhs}$ calculated via Eq.~\eqref{eq:ads} and $\Gamma_d$ is the desired value of the adsorption.
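To make the algorithm concrete, the following Python sketch (our own schematic illustration, not the code used to produce the results below) combines Eqs.~(\ref{density}), (\ref{eq:mixing}) and the rescaling step above. Here \texttt{dFex\_drho} is a placeholder for the functional derivative $\delta F_{ex}/\delta\rho$ of the model defined in Sec.~\ref{sec:model_fluid}, and we set $\Lambda=1$.

\begin{verbatim}
import numpy as np

def constrained_picard(dFex_drho, Vext, mu, rho_b, Gamma_d, z,
                       kT=2.0, alpha=0.05, tol=1e-10, max_iter=200000):
    # Picard iteration, Eq. (eq:mixing), with the adsorption constrained
    # to the desired value Gamma_d via the rescaling step above.
    dz = z[1] - z[0]
    rho = np.full_like(z, rho_b)            # initial guess: uniform bulk
    for _ in range(max_iter):
        # Eq. (density): rho_rhs = exp(beta*(mu - dFex/drho - Vext))
        rho_rhs = np.exp((mu - dFex_drho(rho) - Vext) / kT)
        # rescale so that the new profile has adsorption Gamma_d
        Gamma_rhs = np.trapz(rho_rhs - rho_b, dx=dz)
        rho_new = (rho_rhs - rho_b) * (Gamma_d / Gamma_rhs) + rho_b
        rho_next = alpha * rho_new + (1.0 - alpha) * rho
        if np.max(np.abs(rho_next - rho)) < tol:
            return rho_next
        rho = rho_next
    return rho
\end{verbatim}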
\begin{figure}[t] \includegraphics[width = 1.\columnwidth]{Fig3.pdf} \caption{\label{density_09}A sequence of density profiles with decreasing adsorption, corresponding to increasingly thick films of vapour between a wall and the bulk liquid. The adsorption values are $\Gamma\sigma^2=0.0$, $-0.8$, $-2.8$, $-4.8$, $-6.8$, $-8.8$, $-10.8$, $-12.8$ and $-14.8$, { where $\sigma$ is the diameter of the cores of the particles}. The strength of the attraction between the fluid particles is $\beta\epsilon=0.5$, with { range $\lambda=\sigma$}, and the system is at vapour-liquid coexistence, with $\mu=\mu_{coex}$. The wall potential is that in Eq.~\eqref{Yukawall}, with $\beta\epsilon_{w}^{(Y)}=1.817$ and $\lambda_{w}^{(Y)}/\sigma=1$. The inset shows the resulting binding potential, with the points on the curve corresponding to the sequence of density profiles displayed in the main figure.} \end{figure} A typical series of the constrained density profiles calculated using this procedure is displayed in Fig.~\ref{density_09}. These results are for the model fluid defined below, with fixed wall attraction strength. The inset shows the corresponding binding potential $g(\Gamma)$. The global minimum occurs at a small negative value of the adsorption, which corresponds to partial drying of the liquid. In the density profiles there is a peak near the wall, corresponding to some particles being adsorbed preferentially at a particular distance from the surface of the wall. In the second density profile, which corresponds to the minimum in the binding potential, there are some oscillations near the wall, due to packing effects of the particles. As the adsorption becomes increasingly negative, there is an increasingly thick film of the vapour near the wall; as the thickness increases, the vapour density in the film also becomes closer to that of the vapour at bulk vapour-liquid coexistence. \section{Model fluid}\label{sec:model_fluid} \begin{figure}[t] \includegraphics[width=1.\columnwidth]{Fig4.pdf} \caption{\label{fig:pair_pot}{The Yukawa pair potential \eqref{eq:Yuk_pot}, with $\lambda=\sigma$, which is the interaction potential between the fluid particles in our system, plotted as a function of $r$, the distance between the centres of the particles. The parameter $\epsilon$ determines the strength of the attraction for $r>\sigma$, where $\sigma$ is the diameter of the (hard) cores of the particles.}} \end{figure} The model fluid that we consider consists of a system of particles interacting via a pair potential that can be split as follows: \begin{equation} v(r) = v_0(r)+v_1(r), \end{equation} where $r$ is the distance between the centres of the pairs of particles and $v_0(r)$, the repulsive-core part of the potential, is treated via the hard-sphere potential \begin{equation} v_0(r)=\begin{cases} \infty & \text{if $ 0<r\leq \sigma $},\\ 0 & \text{if $ \sigma < r$}, \end{cases} \end{equation} where $\sigma$ is the diameter of the cores of the particles. We model the attractive part of the potential $v_1(r)$ via the following Yukawa potential \begin{equation} {v_1}(r)=\begin{cases} \enspace -\epsilon & \text{if $ 0<r\leq \sigma $},\\ \enspace \frac{-\epsilon e^{-(r-\sigma)/\lambda}}{r/\sigma} & \text{if $ \sigma < r$}, \end{cases}\label{eq:Yuk_pot} \end{equation} where the range of the potential is set by the length parameter $\lambda$ and the strength of the attraction is determined by the interaction energy parameter $\epsilon$. {A plot of the pair potential \eqref{eq:Yuk_pot} is displayed in Fig.~\ref{fig:pair_pot}, for $\lambda=\sigma$, the value used throughout this paper.} We use this Yukawa model potential because it is a widely studied model fluid in DFT, see e.g.\ Refs.~\cite{sullivan1979van, evans1986capillary, tarazona1987phase, louis2002effective, archer2013relationship} for a few examples from over the years, and because it provides a good model for simple liquids \cite{hansen2013theory}. \subsection{DFT implemented} To treat this model fluid using DFT, we must develop an approximation for the excess Helmholtz free energy functional $F_{ex}$ in Eq.~\eqref{Fe}.
We make a standard approximation, treating the contribution to the free energy from the hard-sphere repulsions via fundamental measure theory (FMT) DFT and the attractive part via a van der Waals mean-field-like contribution \cite{evans1979nature, hansen2013theory, evans92, wu2006density, roth2010, PhysRevLett.63.980}, which is nonetheless fairly accurate \cite{archer2017standard}. Thus, the approximation we make is \begin{equation} F_{ex}[\rho(\mathbf r)] = F_{hs} + \frac{1}{2}\iint \rho(\mathbf {r}_1)\rho( \mathbf {r}_2) v_1 (\left|\mathbf {r}_1- \mathbf {r}_2 \right|)\mathrm d\mathbf {r}_1 \mathrm d\mathbf {r}_2, \label{eq:F_ex} \end{equation} where $F_{hs}$ is the hard-sphere contribution to the free energy, which we treat using Rosenfeld's original version of FMT \cite{PhysRevLett.63.980}. There are more modern FMTs that are more accurate when the fluid density is high and approaching freezing \cite{hansen2013theory, roth2010, PhysRevLett.63.980}, but for the present study the Rosenfeld functional is sufficiently accurate. \subsection{Bulk fluid phase diagram} For bulk liquid-vapour coexistence to occur, the temperature $T$, pressure $p$ and chemical potential $\mu$ must be equal in the two coexisting phases. Substituting the constant density $\rho(\mathbf {r})=\rho=N/V$ into Eqs.~\eqref{Fe} and \eqref{eq:F_ex}, where $N$ is the average number of particles in the system and $V$ is the volume, we obtain the Helmholtz free energy of the uniform fluid. The pressure is then obtained from this expression as the derivative $p=-(\partial F/\partial V)_{N,T}$ and the chemical potential as $\mu=(\partial F/\partial N)_{V,T}$. From these two relations, we can then write down a set of simultaneous equations for the coexisting vapour and liquid densities, $\rho_v$ and $\rho_l$, respectively, which are then solved numerically over a range of temperatures to obtain the bulk fluid binodal \cite{hansen2013theory}. \begin{figure}[t] \includegraphics[width=1.\columnwidth]{Fig5.pdf} \caption{\label{phasediagram} Bulk fluid phase diagram in the temperature versus density plane, for the system with $\lambda=\sigma$. The solid line corresponds to the binodal curve and the dashed line corresponds to the spinodal curve.} \end{figure} In Fig.~\ref{phasediagram} we display the resulting bulk fluid phase diagram, showing the binodal curve giving the two distinct densities of the vapour and liquid phases at bulk coexistence. As the temperature $T$ is increased, the density difference between the two coexisting phases decreases and finally becomes zero at the critical temperature $T_c$. The area of the phase diagram outside the binodal curve corresponds to the single phase region, where there is no phase separation. Inside is the two phase region, where vapour-liquid coexistence occurs. We also display the spinodal, which is given by the condition $ \partial^2 f/\partial \rho^2 = 0$, where $f=F/V$ is the free energy per unit volume. Inside the spinodal curve spontaneous phase separation occurs, whilst between the spinodal and the binodal, phase separation is a nucleated process, with a free energy barrier that must be surmounted by thermal fluctuations.
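The coexistence calculation itself is a small numerical exercise. As an illustration, the Python sketch below solves the two conditions $\mu(\rho_v)=\mu(\rho_l)$ and $p(\rho_v)=p(\rho_l)$ at fixed temperature. Note that, for brevity, it uses the Carnahan-Starling expression for the hard-sphere free energy rather than the bulk Rosenfeld form used in our calculations, so the resulting densities differ slightly from the values quoted below; the constant $\ln\Lambda^3$ term is dropped since it merely shifts $\mu$.

\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

kT, eps, sigma, lam = 2.0, 1.0, 1.0, 1.0  # k_B T/epsilon = 2, lambda = sigma

# mean-field parameter a = int v1(r) d^3r for the Yukawa attraction
a = -eps * (4*np.pi/3*sigma**3 + 4*np.pi*sigma*lam*(sigma + lam))

def f_bulk(rho):
    # free energy per volume: ideal + Carnahan-Starling + mean field
    eta = np.pi * rho * sigma**3 / 6.0
    f_id = kT * rho * (np.log(rho) - 1.0)
    f_hs = kT * rho * eta * (4.0 - 3.0*eta) / (1.0 - eta)**2
    return f_id + f_hs + 0.5 * a * rho**2

def mu(rho, h=1e-7):                      # mu = df/drho
    return (f_bulk(rho + h) - f_bulk(rho - h)) / (2*h)

def pressure(rho):                        # p = rho*mu - f
    return rho * mu(rho) - f_bulk(rho)

def coexistence(guess=(0.03, 0.6)):
    eqs = lambda r: [mu(r[0]) - mu(r[1]), pressure(r[0]) - pressure(r[1])]
    return fsolve(eqs, guess)             # (rho_v, rho_l)

print(coexistence())
\end{verbatim}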
In the present work, we perform calculations at $k_BT/\epsilon=2$, which is sufficiently far from the critical point to see well separated bulk densities of $\rho_l \sigma^3 = 0.61$ and $\rho_v \sigma^3 =0.03$, {where at coexistence ($\mu=\mu_{coex}$) the pressure is $\beta\sigma^3p=0.026$}. \subsection{External potential due to the wall}\label{subsec:ext_pots} We assume that the planar solid substrate exerts an external potential on the fluid that varies in only one Cartesian direction, along the $z$-axis, which is perpendicular to the plane of the substrate. Having chosen to model the fluid particle-particle interactions via the Yukawa pair potential in Eq.~\eqref{eq:Yuk_pot}, an obvious choice for the potential between the particles and the wall is also a Yukawa: \begin{equation}\label{Yukawall} {V_{ext}}^{(Y)}(z)=\begin{cases} \enspace \infty & \text{if $z<\frac{\sigma}{2}$}, \\ \enspace \frac{-\epsilon_{w}^{(Y)}e^{-z/\lambda_w^{(Y)}}}{z/\sigma}& \text{if $ z\geq\frac{\sigma}{2}$}, \end{cases} \end{equation} where the parameters $\epsilon_{w}^{(Y)}$ and $\lambda_w^{(Y)}$ determine the strength of the attraction to the wall and the range, respectively. We also consider the behaviour of the fluid in the presence of a wall with a {$z^{-3}$} power-law form for the decay of the attractive part of the potential. {Such a potential can be viewed as originating from the $r^{-6}$} decay form of the potential due to dispersion interactions that is found in e.g.\ the Lennard-Jones (LJ) model pair potential \cite{hansen2013theory}. {If one assumes a semi-infinite wall of uniform density and then integrates over the total attractive contribution due to the wall, treating all the elements as interacting with a given fluid particle with a potential decaying $\propto r^{-6}$, then} the resulting form is {(see e.g.\ Ref.~\cite{chacko2017solvent})} \begin{equation}\label{LJwall} {V_{ext}}^{(LJ)}(z)=\begin{cases} \enspace \infty & \text{if $z<\frac{\sigma}{2}$}, \\ \enspace \frac{-\epsilon_{w}^{(LJ)}}{\left(z/\sigma\right)^3} & \text{if $z\geq\frac{\sigma}{2}$}, \end{cases} \end{equation} where the parameter $\epsilon_{w}^{(LJ)}$ defines the strength of the attraction in this potential. Another wall potential that we consider is one with a short-ranged attraction, decaying with a Gaussian form \cite{archer2002wetting} \begin{equation} {V_{ext}}^{(G)}(z)=\begin{cases} \enspace \infty & \text{if $z<\frac{\sigma}{2}$}, \\ \enspace -\epsilon_{w}^{(G)}e^{-(z/\lambda_w^{(G)})^2}& \text{if $z\geq\frac{\sigma}{2}$}, \end{cases} \end{equation} where the parameters $\epsilon_{w}^{(G)}$ and $\lambda_w^{(G)}$ define the strength and range of this potential. Finally, we also consider a wall potential that has exponential decay \begin{equation}\label{expform} {V_{ext}}^{(E)}(z)=\begin{cases} \enspace \infty & \text{if $z<\frac{\sigma}{2}$}, \\ \enspace -\epsilon_{w}^{(E)}e^{-z/\lambda_w^{(E)}}& \text{if $z\geq\frac{\sigma}{2}$}, \end{cases} \end{equation} with parameters $\epsilon_{w}^{(E)}$ and $\lambda_w^{(E)}$ determining the strength and range of the potential. The reason that we consider all these different potentials is that the form of the decay as $z\to\infty$ influences the form of the decay of $g(h)$ for $h\to\infty$ \cite{dietrich1988inphase, archer2002wetting}, as we also show below. \begin{figure}[t] \includegraphics[width=1.\columnwidth]{Fig6a.pdf} \includegraphics[width=1.\columnwidth]{Fig6b.pdf} \caption{\label{yukawa_vary_ew}A sequence of binding potentials $g(\Gamma)$, for varying wall attraction strength. 
The fluid pair interactions have $\beta\epsilon=0.5$ and $\lambda/\sigma=1$. In (a) we display results for the Yukawa wall potential (\ref{Yukawall}), for varying $\beta\epsilon_{w}^{(Y)}$ as given in the key, whilst in (b) are results for the LJ-like wall (\ref{LJwall}) with varying $\beta\epsilon_{w}^{(LJ)}$. The inset shows the binding potential for the strongly attractive wall with $\beta\epsilon_{w}^{(LJ)}=0.45$. In all except this last case the binding potentials are smooth and featureless, but in this case some small-amplitude oscillations can be seen in $g(\Gamma)$. } \end{figure} {All our calculations of density profiles are performed on a regular grid with $2^{13}$ points and a grid spacing $dz=0.02\sigma$, so that the total domain length is $164\sigma$. This has the wall at one end of the system and a section at the other end with $\rho(z)=\rho_l$ (i.e.\ the bulk density boundary condition), followed by a section where $\rho(z)=0$, to provide padding for the fast Fourier transforms used to evaluate the convolution integrals. For more details on how to calculate density profiles using DFT see Ref.~\cite{roth2010}.} \section{Results for the binding potential}\label{sec:results} We calculate the binding potentials $g(\Gamma)$ for a range of different values of the adsorption $\Gamma$ using the procedure described above in Sec.~\ref{sec:DFT_approach}, for the various different wall potentials given in the previous section and for varying values of the attraction strength parameter. In Fig.~\ref{yukawa_vary_ew}(a) are results for the Yukawa wall potential (\ref{Yukawall}) and in Fig.~\ref{yukawa_vary_ew}(b) are results for the LJ-like wall potential (\ref{LJwall}). We see that in both cases, when the solid substrate is very weakly attractive, the global minimum of $g(\Gamma)$ is at $\Gamma \to -\infty$, corresponding to drying of the fluid from the wall being the equilibrium state of the system. For the more attractive substrates, the global minimum of the binding potentials is at a small negative value of the adsorption, which corresponds to the partial-drying situation. Our results are consistent with previous DFT predictions that the drying transition for these types of systems is a continuous (critical) transition -- see Ref.~\cite{evans2017drying} and references therein for an excellent recent discussion of this. It is interesting to note that this minimum in $g(\Gamma)$ is fairly broad and the binding potentials are rather smooth and featureless, despite the density profiles that enter their calculation having significant structure near the wall -- see Fig.~\ref{density_09}. The width of the minimum in $g(\Gamma)$ is certainly broader than the typical minima obtained in Ref.~\cite{hughes2017influence} for the case of liquid films adsorbed at a wall with the bulk phase being the vapour. We believe this is because, when there is a tendency towards drying at a solvophobic interface, there can be significant interfacial fluctuations \cite{jamadagni2011hydrophobicity, kanduc2016water, evans2015local, evans2015quantifying, evans2017drying, chacko2017solvent}, and so in these cases any minima in $g(\Gamma)$ are fairly broad. In the inset to Fig.~\ref{yukawa_vary_ew}(b) we show the binding potential for a more strongly attracting wall, with $\beta\epsilon_{w}^{(LJ)}=0.45$. In this case, the liquid is more strongly attracted to the wall and so we see more layered packing effects at the wall in the corresponding density profiles (not displayed). 
In cases like this, convergence of the numerics becomes more difficult, because the vapour phase is strongly disfavoured at the wall, the liquid being energetically much more favourable there. We also see in this situation the appearance of some small-amplitude oscillations in the binding potential, stemming from particle layering at the wall. \begin{figure}[t] \includegraphics[width=1.\columnwidth]{Fig7a.pdf} \includegraphics[width=1.\columnwidth]{Fig7b.pdf} \caption{\label{comparelj}{In panel (a) we show a} comparison of the binding potentials corresponding to {the} four different external potentials defined in Sec.~\ref{subsec:ext_pots}. The bulk fluid is the same in all cases, with $\beta\epsilon=0.5$ and $\lambda/\sigma=1$. The parameters are chosen as given in the key and with $\lambda_{w}^{(Y)}=\lambda_{w}^{(G)}=\lambda_{w}^{(E)}=\lambda$ (all the same), so that they all have the same minimal value of $g(\Gamma_0)$ and therefore also the same macroscopic contact angle. {In panel (b) we display plots of the corresponding four different wall potentials.}} \end{figure} \begin{figure}[t] \includegraphics[width=1.\columnwidth]{Fig8.pdf} \caption{\label{lg_comp} The same binding potentials as displayed in Fig.~\ref{comparelj}{(a)}, except here we instead plot $\ln |g(\Gamma)|$ versus $\Gamma$.} \end{figure} In Fig.~\ref{comparelj}{(a)} we compare four binding potentials corresponding to the four different external potentials defined in Sec.~\ref{subsec:ext_pots}, with the wall potential attraction strength parameters chosen so that they all have the same minimal value of $g(\Gamma_0)$. Since the vapour-liquid interfacial tension $\beta\sigma^2 \gamma_{lv}=0.603$ is the same in all cases, this means that these all correspond to the same macroscopic contact angle, {because they all have the same minimum value of $g(\Gamma_0)$} -- see Eq.~\eqref{youngs}. It is interesting to note that the width of the potential minimum in $g(\Gamma)$ is not the same for each of these different wall potentials. This means the precise form of the external potential due to the wall is important for controlling the amplitude of interfacial fluctuations near the wall. We also see that the form of the external potential controls significantly the way $g(\Gamma)$ decays as $\Gamma\to-\infty$. This can be seen even more clearly in Fig.~\ref{lg_comp} where we instead plot $\ln |g(\Gamma)|$ versus $\Gamma$, which allows one to observe the form of the asymptotic decay more clearly. The form of the asymptotic decay of binding potentials is discussed extensively in Refs.~\cite{dietrich1988inphase, schick1990liquids, henderson2005statistical}, and these results largely carry over to the case of drying at interfaces -- see Ref.~\cite{evans2017drying}. As one should expect, the slowest decay is for the LJ-like wall potential \eqref{LJwall}, since this has a power-law decay for $z\to\infty$. For the other three wall potentials the binding potential decays exponentially, so that when we plot $\ln|g(\Gamma)|$, we see in Fig.~\ref{lg_comp} a straight line. We see that the gradient is roughly the same for all three. This is because at this particular state point the correlation length in the vapour phase $\xi_v\approx \sigma=\lambda$, i.e.\ it is very similar in value to the decay length of the wall potentials \eqref{Yukawall} and \eqref{expform}. 
For short-ranged wall-fluid and fluid-fluid potentials one should expect the binding potential to decay for $h\to\infty$ as \cite{dietrich1988inphase, schick1990liquids, archer2002wetting, evans2017drying} \begin{equation}\label{eq:gg} g(h)= a\exp(-h/\xi_v)+\cdots, \end{equation} where $a$ is a constant and ``$\cdots$'' denotes faster decaying terms. So in this case, when one plots $\ln|g(\Gamma)|$, for large negative $\Gamma$ one sees a straight line with gradient equal to $-1/[\xi_v(\rho_v-\rho_l)]$. On the other hand, if there is an exponentially decaying wall potential \eqref{expform}, then one instead has \cite{archer2002wetting} \begin{equation}\label{eq:ggg} g(h)= a\exp(-h/\xi_v)+b\exp(-h/\lambda_w^{(E)})+\cdots, \end{equation} where $b$ is a constant, so whichever is bigger out of $\xi_v$ and $\lambda_w^{(E)}$ determines the ultimate decay of $g(h)$ for $h\to\infty$. When the wall potential has a Yukawa decay like in Eq.~\eqref{Yukawall}, then this can also determine the decay of $g(h)$, somewhat like in Eq.~\eqref{eq:ggg}, except with a renormalised decay length \cite{archer2002wetting}. {Note that for larger negative values of the adsorption the binding potential $g(\Gamma)$ becomes small and so on the logarithmic scale in Fig.~\ref{lg_comp} one sees the numerical round-off errors, appearing as random fluctuations with increasing amplitude as $\Gamma\to-\infty$.} {It is also interesting to note in Fig.~\ref{comparelj}(a) that all of the binding potentials have a finite value for $g(\Gamma\to0)$, but the values of $g(0)$ for the different wall potentials are all very different and in particular the result corresponding to the LJ wall is much higher. We believe the origin of this difference is the fact that the LJ wall potential \eqref{LJwall} has a deeper (but narrower) potential minimum for $z\to\sigma/2^+$ than the other wall potentials, as can be seen in Fig.~\ref{comparelj}(b). This is also supported by the fact that the values of $g(0)$ are ordered in magnitude in the same order as the values of the wall potentials at contact, $V_{ext}^{(i)}(z\to\sigma/2^+)$. That the value of $g(0)$ must be finite was discussed in the context of liquid droplets at surfaces in Refs.~\cite{hughes2015liquid, hughes2017influence}. Indeed, $g(\Gamma)$ remains finite even for small positive values of $\Gamma$, which corresponds to a negative excess of vapour being adsorbed at the wall. However, the fact that $g(0)$ remains finite should not significantly affect the behaviour at the contact line, since the value at the minimum $g(\Gamma_0)$ is far more important than the value $g(0)$ in determining contact line properties.} In Fig.~\ref{comparelambda_exp} we display a set of binding potentials for the exponential wall potential \eqref{expform}, calculated for varying wall potential decay length $\lambda_w^{(E)}$. Increasing the range for fixed $\epsilon_w^{(E)}$ increases the overall integrated strength of the wall potential and so, of course, makes the liquid more favourable at the wall and the vapour less favourable. This is manifest in the increasingly deep minimum in $g(\Gamma)$ as $\lambda_w^{(E)}$ is increased. In Fig.~\ref{log_exp} we plot $\ln|g(\Gamma)|$, which allows one to see the crossover from the first term on the right hand side of Eq.~\eqref{eq:ggg} dominating the decay of $g(\Gamma)$, to the second term dominating, for larger $\lambda_w^{(E)}$. 
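This crossover is easy to reproduce with the two-exponential form \eqref{eq:ggg} itself. The following minimal sketch (with placeholder amplitudes, not fitted values) extracts the local decay length from the slope of $\ln|g(h)|$ and shows it interpolating between $\xi_v$ and $\lambda_w^{(E)}$:
\begin{verbatim}
# Sketch: crossover in the decay of g(h), Eq. (eq:ggg). The amplitudes
# a, b and the two decay lengths below are placeholders for illustration.
import numpy as np

xi_v, lam_w = 1.0, 1.8       # vapour correlation length; wall decay length
a, b = -0.10, -0.01          # placeholder amplitudes

h = np.linspace(2.0, 40.0, 400)
g = a*np.exp(-h/xi_v) + b*np.exp(-h/lam_w)

# local decay length -1/(d ln|g|/dh): close to xi_v at small h,
# approaching lam_w at large h, since lam_w > xi_v here
local_decay = -1.0/np.gradient(np.log(np.abs(g)), h)
print(local_decay[0], local_decay[-1])
\end{verbatim}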
In the following section we take the binding potentials that we have calculated using DFT and input them into the IH \eqref{IH} in order to determine vapour nanobubble height profiles. To do this we fit the binding potential to obtain an analytic form which can then be input easily. The form we use is (c.f.\ Eq.~\eqref{eq:ggg} and also Refs.~\cite{hughes2015liquid, hughes2017influence}): \begin{equation} \label{eq:fit_g} g(\Gamma) = a_1e^{\frac{\Gamma}{l_0}}+a_2e^{\frac{2\Gamma}{l_0}}+a_3e^{\frac{3\Gamma}{l_0}}+\cdots, \end{equation} where $l_0$, $a_1$, $a_2$, $a_3$, etc., are parameters to be fitted. {The values obtained for these parameters for all of the binding potentials displayed in this paper are given in Table~\ref{table:nonlin} in the Appendix.} Recall that $\Gamma$ is normally a negative quantity in Eq.~\eqref{eq:fit_g}. \begin{figure} \includegraphics[width=1.\columnwidth]{Fig9.pdf} \caption{\label{comparelambda_exp}A series of binding potentials for the exponential wall potential \eqref{expform} with varying $\lambda_w^{(E)}$ and fixed $\beta\epsilon_w^{(E)}=1$. The fluid pair interactions have $\beta\epsilon=0.5$ and $\lambda/\sigma=1$.} \end{figure} \begin{figure} \includegraphics[width=1.\columnwidth]{Fig10.pdf} \caption{\label{log_exp}\label{lg_comp2} The same binding potentials as displayed in Fig.~\ref{comparelambda_exp} for varying $\lambda_w^{(E)}$, but here we instead plot $\ln |g(\Gamma)|$ versus $\Gamma$.} \end{figure} \section{Vapour nanobubble profiles}\label{sec:bubble_profiles} \begin{figure} \includegraphics[width=1.\columnwidth]{Fig11.pdf} \caption{\label{comparebubbleprofile} A series of equilibrium vapour nanobubble height profiles $h(x)=\Gamma(x)/(\rho_v-\rho_l)$, calculated by minimising Eq.~\eqref{IH} together with the binding potentials for the fluid with $\beta\epsilon=0.5$ and $\lambda/\sigma=1$ at the Yukawa wall \eqref{Yukawall}, with fixed $\lambda_{w}^{(Y)}/\sigma=1$ and various values of the wall attraction parameter $\epsilon_{w}^{(Y)}$, as given in the key. The total area under all of the curves is $2727\sigma^2$ and the length of the domain $L=600\sigma$.} \end{figure} In Fig.~\ref{comparebubbleprofile}, we display a sequence of equilibrium vapour nanobubble height profiles $h(x)=\Gamma(x)/(\rho_v-\rho_l)$, calculated by minimising Eq.~\eqref{IH} together with binding potentials calculated using DFT. We do this for the fluid with interaction parameters $\beta\epsilon=0.5$ and $\lambda/\sigma=1$ at a series of walls with the Yukawa potential \eqref{Yukawall} with fixed $\lambda_{w}^{(Y)}/\sigma=1$ and various values of the wall attraction parameter $\epsilon_{w}^{(Y)}$. In Eq.~\eqref{IH} we set the liquid-vapour interfacial tension $\beta\sigma^2\gamma_{lv}=0.603$, the value we obtain from the DFT. We also assume for simplicity that the system is uniform in the $y$-direction, so strictly speaking the profiles that we calculate are actually for ridge-shaped nanobubbles. However, we do not expect the cross-sections of radially symmetric height profiles (varying in both the $x$- and $y$-directions) to differ qualitatively from the profiles we calculate here. We apply periodic boundary conditions $h(x=0)=h(x=L)$, where $L$ is the length of the domain. The height profiles in Fig.~\ref{comparebubbleprofile} all have the same area under the curve (i.e.\ the same total adsorption). 
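Fitting the form \eqref{eq:fit_g} is a standard nonlinear least-squares problem. The sketch below, with placeholder data standing in for the DFT output and the series truncated at three terms, shows one way to obtain the coefficients (the fits reported in Table~\ref{table:nonlin} use up to eight terms):
\begin{verbatim}
# Sketch: fitting a computed binding potential to Eq. (eq:fit_g),
# truncated at three terms. Gamma_data and g_data are placeholders for
# the (adsorption, binding potential) pairs produced by the DFT.
import numpy as np
from scipy.optimize import curve_fit

def g_fit(Gamma, a1, a2, a3, l0):
    x = np.exp(Gamma/l0)           # Gamma is normally negative here
    return a1*x + a2*x**2 + a3*x**3

Gamma_data = np.linspace(-20.0, -0.5, 200)     # placeholder grid
g_data = -0.10*np.exp(Gamma_data/1.1) \
         + 0.05*np.exp(2.0*Gamma_data/1.1)     # placeholder "data"

params, _ = curve_fit(g_fit, Gamma_data, g_data,
                      p0=[-0.1, 0.1, 0.0, 1.0])
a1, a2, a3, l0 = params
\end{verbatim}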
We numerically minimise the free energy \eqref{IH} by solving the corresponding thin-film equation with disjoining pressure $\Pi(h)=-\partial g/\partial h$, evolving until the solution converges to equilibrium, based on the approach of Ref.~\cite{yin2017films}. This uses the method of lines, with finite difference approximations for the spatial derivatives and the {\it ode15s} Matlab variable-step, variable-order solver \cite{matlabodesuite}. The initial guess to equilibrate from has a Gaussian shaped ``bump'' in it that breaks the symmetry and determines the final location of the nanobubble on the surface. In Fig.~\ref{comparebubbleprofile} we see that the vapour nanobubbles become more spread out over the surface as the attraction due to the wall is decreased. Then, for $\beta\epsilon_{w}^{(Y)}=0.6$, there is a uniform thickness film of vapour on the substrate. This corresponds to the drying transition and it occurs at the value of $\epsilon_{w}^{(Y)}$ that one would expect from inspecting the binding potential curves in Fig.~\ref{yukawa_vary_ew}(a), i.e.\ where the minimum in $g(h)$ at a finite value of $h$ disappears, {which occurs via the location of the minimum diverging, $h\to\infty$, since this drying transition is continuous (critical).} For the profiles containing a nanobubble, the height of the vapour ``precursor'' film corresponds roughly to the value at the minimum in the binding potentials for the different values of $\beta\epsilon_{w}^{(Y)}$. However, in a finite-size domain, the height is shifted slightly from the minimum value due to the Laplace pressure in the nanobubbles combined with the effects of mass conservation in our periodic domain. {The excess pressure due to the presence of the nanobubble has two components, \begin{equation}\label{eq:pressure_comps} \frac{\delta F_{\textrm{IH}}}{\delta h}=-\Pi(h(x))-\kappa(h(x)), \end{equation} where $F_{\textrm{IH}}$ is given in Eq.~\eqref{IH}, $\Pi$ is the disjoining pressure and the curvature contribution is \begin{equation}\label{eq:kappa} \kappa=\gamma_{lv}\nabla\cdot\left(\frac{\nabla h}{\sqrt{1+(\nabla h)^2}}\right). \end{equation} In Fig.~\ref{fig:press_components} we display the values of these two contributions to the excess pressure as a function of position through a nanobubble, for the case where $\beta\epsilon_w^{(Y)}=1.5$. The corresponding nanobubble height profile is displayed in Fig.~\ref{comparebubbleprofile}. We see in Fig.~\ref{fig:press_components} that these two pressure components vary significantly with $x$, in particular in the contact line region. Of course, the sum of these is a constant, as this is the condition for equilibrium.} \begin{figure}[t] \includegraphics[width=1.\columnwidth]{Fig12.pdf} \caption{The components of the excess pressure, $\Pi$ and $\kappa$, given by Eqs.~\eqref{eq:pressure_comps} and \eqref{eq:kappa}, for a nanobubble with volume $2727\sigma^2$ and wall attraction strength $\beta\epsilon_w^{(Y)}=1.5$. The corresponding height profile is displayed in Fig.~\ref{comparebubbleprofile}.} \label{fig:press_components} \end{figure} \begin{figure}[t] \includegraphics[width=1.\columnwidth]{Fig13.pdf} \caption{\label{leftright} A comparison of two equilibrium vapour nanobubble profiles on a heterogeneous surface with position dependent binding potential \eqref{eq:x_var_g}. The external potential due to the wall has attraction strength $\beta\epsilon_w^{(Y)}=2.1$ on the right half of the system and $\beta\epsilon_w^{(Y)}=1.8$ on the left half. 
The total volume of vapour in the system is the same in both cases.} \end{figure} As an example of the type of multiscale interfacial phenomenon that our coarse-grained model can be used to describe, we compute vapour nanobubble height profiles on a patterned heterogeneous surface. This consists of a surface divided into two halves, each with a different wettability. We calculate the free energies for nanobubbles on each half, and from this we are able to determine the relative probabilities for finding vapour nanobubbles on each type of surface. We define our position dependent binding potential as \begin{equation}\label{eq:x_var_g} g(x,h)=g_l(h)(1-f(x))+g_r(h)f(x), \end{equation} where the smooth switching function is \begin{align} f(x) =& \frac{1}{2}\left[\tanh\left(\frac{x-L/2}{\mathcal{W}}\right) -\tanh\left(\frac{x-L}{\mathcal{W}}\right)\right]\nonumber\\ &+ \frac{1}{2}\left[\tanh\left(\frac{x+L/2}{\mathcal{W}}\right) -\tanh\left(\frac{x}{\mathcal{W}}\right)\right], \end{align} in which $\mathcal{W}=\sigma$ determines the width of the smooth transition zone between the two halves of the surface. This function also satisfies our periodic boundary conditions. $g_l(h)$ and $g_r(h)$ are the binding potentials on the left and right hand halves of the surface, respectively. These are calculated for the Yukawa wall with $\lambda_{w}^{(Y)}/\sigma=1$. On the right we have $\beta\epsilon_{w}^{(Y)}=2.1$, which represents a more solvophilic surface, whilst on the left we have a lower attraction parameter, $\beta\epsilon_{w}^{(Y)}=1.8$, which represents a more solvophobic surface. In Fig.~\ref{leftright} we display the height profiles for two different nanobubbles having the same volume $V$ but each centred on the two different halves of the system. The total domain length is $L=600\sigma$. The initial condition used to calculate each of these has the Gaussian bump centred at either $x=L/4$ or $x=3L/4$, in order to locate the centres of the final equilibrium nanobubbles at these points. The left hand vapour nanobubble, on the less attractive wall (smaller $\epsilon_{w}^{(Y)}$), has the lower free energy. The free energy of the whole system $F$ is calculated using Eq.~\eqref{IH} and in Fig.~\ref{energy}{(a)} we display results for $F$ calculated as a function of $V$. In this figure these results are compared with those from a simple macroscopic (capillarity) approximation, described below. Using this data, in {Fig.~\ref{energy}(b) we plot the quantity $\beta(F_r-F_l)$} as a function of $V$, where $F_l$ is the free energy for the nanobubble on the left and $F_r$ when it is on the right. Since the probability of a given state $i$ occurring is $P_i\propto e^{-\beta F_i}$, the ratio of the probabilities for finding the nanobubble on the two different halves of the system is $P_r/P_l=e^{-\beta(F_r-F_l)}$, {i.e.\ the exponential of minus the quantity displayed in Fig.~\ref{energy}(b) is the relative probability}. Since the left half of the surface is more solvophobic, we have $P_l>P_r$, and as the size of the nanobubbles increases, finding the nanobubble on the more solvophobic half of the system becomes ever more likely, i.e.\ $P_l\gg P_r$. 
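A minimal sketch of this kind of calculation is given below (it is not the authors' code): the profile $h(x,t)$ is relaxed with a mass-conserving thin-film equation, using the switching function above and a placeholder two-exponential binding potential in place of the fitted DFT results. The mobility $M(h)=h^3/3$ is a conventional choice and only affects the route to equilibrium, not the final state.
\begin{verbatim}
# Sketch: relaxing h(x,t) to an equilibrium nanobubble profile on the
# two-level wettability pattern of Eq. (eq:x_var_g). The binding
# potentials g_l, g_r are placeholder two-exponential forms, not the
# fitted DFT results; all coefficients below are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

L, N, gamma_lv, W = 600.0, 256, 0.603, 1.0
x = np.arange(N)*L/N
dx = x[1] - x[0]

def ddx(u):                   # periodic centred first derivative
    return (np.roll(u, -1) - np.roll(u, 1))/(2.0*dx)

def f_switch(x):              # the tanh switching function above
    return 0.5*(np.tanh((x - L/2)/W) - np.tanh((x - L)/W)) \
         + 0.5*(np.tanh((x + L/2)/W) - np.tanh(x/W))

def Pi(h, a1, a2, l0=1.1):    # -dg/dh for g = a1 e^{-h/l0} + a2 e^{-2h/l0}
    return (a1/l0)*np.exp(-h/l0) + (2.0*a2/l0)*np.exp(-2.0*h/l0)

fx = f_switch(x)

def rhs(t, h):
    Pi_x = (1.0 - fx)*Pi(h, -0.02, 0.05) + fx*Pi(h, -0.01, 0.05)
    hx = ddx(h)
    kappa = gamma_lv*ddx(hx/np.sqrt(1.0 + hx**2))  # Eq. (eq:kappa)
    mu_ex = -kappa - Pi_x                          # Eq. (eq:pressure_comps)
    return ddx((h**3/3.0)*ddx(mu_ex))              # mass-conserving dynamics

h0 = 1.0 + 8.0*np.exp(-((x - L/4)/20.0)**2)        # Gaussian bump at x = L/4
sol = solve_ivp(rhs, [0.0, 2.0e3], h0, method='BDF', rtol=1e-6)
h_eq = sol.y[:, -1]           # approximately equilibrated profile
\end{verbatim}
Seeding the bump at $x=3L/4$ instead selects the right-hand half; in this sketch that is how the two profiles compared in Fig.~\ref{leftright} would be generated.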
Note that the curves in Fig.~\ref{energy} end on the left at a finite value of the volume $V$. This is because, when the volume of vapour in the system is less than the end-point value, the system can lower its total free energy by having a uniform film thickness everywhere, at a value shifted slightly from the value at the minimum of $g(h)$, rather than by having most of the system with $h$ at the minimum of $g(h)$ whilst also retaining a bubble, which carries a larger interfacial contribution from its curvature. \begin{figure}[t] \includegraphics[width=1.\columnwidth]{Fig14a.pdf} \includegraphics[width=1.\columnwidth]{Fig14b.pdf} \caption{\label{energy}{In panel (a) we display the} free energy $F$ as a function of the vapour nanobubble volume $V$, for a heterogeneous system with wall attraction $\beta\epsilon_{w}^{(Y)}=2.1$ on the right half of the surface and $\beta\epsilon_{w}^{(Y)}=1.8$ on the left. The labels ``right'' and ``left'' in the key denote on which side of the system the nanobubble is located -- c.f.\ Fig.~\ref{leftright}. We compare results calculated from Eq.~\eqref{IH} with the binding potentials obtained from DFT, which are labelled ``DFT+IH'', with results from a simple macroscopic approximation \eqref{eq:approx}, labelled ``approx.''. {In panel (b)} we plot the quantity {${\beta(F_r-F_l)}$ as a function of $V$. The exponential $e^{-\beta(F_r-F_l)}$} gives the ratio of the probabilities $P_r/P_l$ of finding the nanobubble on the two sides. Since the left half of the surface is more solvophobic, we have $P_l>P_r$.} \end{figure} The macroscopic (capillarity) approximation that we compare our results with consists of setting the height profile of the vapour nanobubble to be an analytic piecewise function of $x$. We assume that outside of the nanobubble the film height is uniform: in the left half of the system we set $h(x)=h_l$, where $h_l$ is the value at the minimum of the binding potential $g_l(h)$, and in the right half we set $h(x)=h_r$, where $h_r$ is the value at the minimum of $g_r(h)$. For the nanobubble itself, we assume the height profile is the {arc of a circle, $h(x)=h_{circ}(x)=h_c+\sqrt{R^2-(x-x_c)^2}$, where $h_c$, $x_c$ and $R$ are constant coefficients to be determined that depend on the size and location of the nanobubble}. If we denote the locations of the two nanobubble contact lines to be $x=A$ and $x=A+w$, i.e.\ $w$ is the width of the nanobubble, then {$x_c=A+w/2$ and} the height profile must be continuous at these two points. So, when the nanobubble is on the left we have $h(A)=h(A+w)=h_l$ and when it is on the right, $h(A)=h(A+w)=h_r$. The second condition that we apply on the {circular arc part of the nanobubble profile is that the slope at both ends should be equal to the tangent of the contact angle, $h'(A)=-h'(A+w)=-\tan\theta$. With these conditions, it is straightforward to write the coefficients $R$ and $h_c$} as functions of $A$ and $w$. When the nanobubble is on the left hand side, the volume (area under the profile) is: \begin{equation} V = h_l\left(\frac{L}{2}-w\right)+h_r\frac{L}{2}+\int_{A}^{A+w} h_{circ}(x) \mathrm{d}x, \end{equation} with an analogous formula for when it is on the right. This gives us an expression for $V$ as a function of $w$. Equivalently, we can vary $w$ to obtain a series of nanobubble profiles for various values of $V$. Using this height profile we can also obtain an approximation for the free energy $F$. The surface tension contribution depends on the length of the interface. 
This is easy to get for the straight-line pieces, and for the {circular} nanobubble section it depends on the arc length \begin{equation} s=\int_{A}^{A+w}\sqrt{1+h'(x)^2} \mathrm{d}x, \end{equation} which is also straightforward to evaluate. We assume that there is only a contribution to $F$ from the binding potential when the height profile is at the value at the minimum of $g(h)$. Putting all this together we obtain the following estimate for the total free energy of the system when the vapour nanobubble is on the left hand side of the system \begin{equation} F_{\textrm{IH}}^{\textrm{approx.}}=g_l(h_l)\left(\frac{L}{2}-w\right)+g_r(h_r)\frac{L}{2}+\gamma_{lv}(s+L-w), \label{eq:approx} \end{equation} and an analogous expression when the nanobubble is on the right. The results plotted in Fig.~\ref{energy} labelled ``approx.'' are obtained using Eq.~\eqref{eq:approx}. We see that there is fairly good agreement in Fig.~\ref{energy}{(a)} between Eq.~\eqref{eq:approx} and the results from the full minimisation of Eq.~\eqref{IH}; the difference is less than {1\%. However, as Fig.~\ref{energy}(b) illustrates}, even such small errors can make {more of a difference when calculating quantities like $(F_r-F_l)$ and so also the ratio $P_r/P_l=e^{-\beta(F_r-F_l)}$}, demonstrating the importance of getting details right for this sort of calculation. {This is particularly important for small nanobubbles. For example, when the nanobubble volume $V=1400\sigma^2$, we have $e^{-\beta(F_r-F_l)}=e^{-5.9}\approx0.0027$ via Eq.~\eqref{eq:approx}, but from the full minimisation of Eq.~\eqref{IH} we obtain $e^{-\beta(F_r-F_l)}=e^{-5.4}\approx0.0045$; i.e.\ there is a 60\% difference between the two results for the relative probabilities $P_r/P_l$. Another important detail for these types of calculations} is correctly capturing the true overall shape of $g(h)$, since this makes a contribution to $F$ from the contact line region of the nanobubble, which is neglected in Eq.~\eqref{eq:approx}. Another source of error in Eq.~\eqref{eq:approx} worth highlighting is that we have assumed that the heights of the film away from the nanobubble are the values at the exact minima of the binding potentials $g_l$ and $g_r$. Consequently, any additional vapour volume in the system is assumed to be in the nanobubble. In reality, as we see from the results from minimising Eq.~\eqref{IH} when magnifying the small-$h$ region (not displayed), there is a balance between having the vapour in the small-$h$ flat layer and having it in the nanobubble. The Laplace pressure in the nanobubble causes it to shrink slightly, transferring some of the vapour into the flat film and thereby raising the free energy contribution from these portions of the system. There are also further sources of error due to the assumption that the nanobubble has a {circular} shape, in particular in the region near the contact lines, where it would be expected to smoothly transition to the film heights, and from approximating the profile's transition across the wettability gradient as a sharp step. 
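The construction of Eq.~\eqref{eq:approx} can be written out explicitly. The following sketch evaluates $F$ and $V$ for a left-side nanobubble of contact width $w$; the film heights, binding potential minima and contact angle are placeholder values, not the fitted DFT results:
\begin{verbatim}
# Sketch: the capillarity estimate, Eq. (eq:approx), for a nanobubble on
# the left half of the surface. h_l, h_r, g_l_min, g_r_min and theta are
# placeholders for illustration.
import numpy as np

gamma_lv, L = 0.603, 600.0
h_l, h_r = 1.2, 0.8                    # film heights at the minima of g
g_l_min, g_r_min = -0.010, -0.014      # minimum values of g_l and g_r

def capillarity(w, theta):
    """F and V for a left-side bubble of contact width w; theta is the
    macroscopic contact angle (> pi/2 for a solvophobic wall)."""
    R = (w/2.0)/np.sin(theta)                  # from h'(A) = -tan(theta)
    root = (w/2.0)*abs(np.cos(theta))/np.sin(theta)
    h_c = h_l - root                           # height of the circle centre
    phi0 = np.arcsin(w/(2.0*R))                # half-angle of the arc
    cap = h_c*w + (w/2.0)*root + R**2*phi0     # \int h_circ dx over the cap
    V = h_l*(L/2.0 - w) + h_r*(L/2.0) + cap
    s = 2.0*R*phi0                             # arc length of the cap
    F = g_l_min*(L/2.0 - w) + g_r_min*(L/2.0) + gamma_lv*(s + L - w)
    return F, V

F, V = capillarity(w=150.0, theta=np.deg2rad(140.0))
\end{verbatim}
Sweeping $w$ then gives the ``approx.'' curves of $F$ against $V$ in the manner described above.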
\begin{figure}[t] \includegraphics[width=1.\columnwidth]{Fig15.pdf} \caption{The excess pressure $-\Pi-\kappa$, given by Eq.~\eqref{eq:pressure_comps}, for a range of different nanobubble volumes and for values of the wall attraction strength parameter $\beta\epsilon_w^{(Y)}$ as given in the key. See also Fig.~\ref{comparebubbleprofile}.} \label{fig:ex_press} \end{figure} {In Fig.~\ref{fig:ex_press} we display $(-\Pi-\kappa)$, the excess pressure due to the presence of the nanobubble, given by Eq.~\eqref{eq:pressure_comps}, for a range of different nanobubble volumes and for a range of different values of the wall attraction strength parameter $\beta\epsilon_w^{(Y)}$. Recall that the bulk fluid pressure is $\beta\sigma^3p=0.026$, so the figure shows that these excess pressures are comparable in magnitude. For values of $\beta\epsilon_w^{(Y)}$ smaller than that at the drying transition we see that for $V\to\infty$, $(-\Pi-\kappa)\to0$ from below, whilst for $\beta\epsilon_w^{(Y)}$ greater than that at the drying transition, $(-\Pi-\kappa)\to0$ from above.} \section{Concluding remarks}\label{sec:conc} In this paper we have presented results for the binding potential $g(h)$ for films of vapour intruding between a bulk liquid and flat planar surfaces, and we have used the calculated $g(h)$ to determine film height profiles for vapour nanobubbles on the surface. The binding potentials are calculated using a microscopic DFT, applying the fictitious external potential method developed by Hughes \emph{et al.}\ \cite{hughes2015liquid, hughes2017influence}, which is based on calculating a series of constrained fluid density profiles at the wall with varying thickness (adsorption). We see from our results in e.g.\ Fig.~\ref{comparelj}{(a)} that despite the resulting binding potentials being rather smooth and featureless, details such as the width of the minimum and the form of the decay in $g(h)$ do depend crucially on the details of the microscopic interactions. We also see from our estimates of the relative probabilities of finding a nanobubble on different parts of a heterogeneous surface, displayed in Fig.~\ref{energy}(b), that having a reliable approximation for $g(h)$ is necessary for the estimates to be accurate. It is clear that to correctly describe vapour nanobubbles one must have an accurate binding potential. Here, we have used a microscopic DFT based on FMT to determine $g(h)$, although one could instead use computer simulations \cite{macdowell2011computer, tretyakov2013parameter, benet2016premelting, jain2019using}. However, the DFT calculations are computationally much faster. The overall coarse-graining procedure developed here, building on the work in Refs.~\cite{hughes2015liquid, hughes2017influence}, allows us to determine multi-scale properties of fluids at interfaces. The approach allows one to go from the microscopic features of the molecular interactions up in length scales to describe mesoscopic aspects such as nanobubbles on surfaces. Our approach has here been applied to a simple model heterogeneous surface, but it could also be applied in a straightforward manner to more complex surfaces and structures, since, for example, the contributions to $g(h)$ from surface curvature are understood \cite{stewart2005critical, stewart2005wetting}. In the work presented here we have assumed that it is just the vapour phase inside the nanobubbles. However, as mentioned in the introduction, perhaps the more experimentally relevant situation is when the nanobubbles also contain dissolved gas (i.e.\ air) molecules that have come out from solution in the bulk liquid. In Ref.~\cite{svetovoy2016effect} a theory for this situation is developed. 
The authors argue that one should set the binding potential in Eq.\ \eqref{IH} to be the potential $U(h)=w(h)-w(h_c)-\beta\mu_g h p_g(h)$, where $w(h)$ is the ``bare'' binding potential between the wall and the bulk liquid and the last term is the contribution from the gas in the nanobubble, which has chemical potential $\mu_g$ and pressure $p_g(h)$, assumed to be related to the disjoining pressure and given by the ideal-gas equation of state. Whilst this approach has the advantage of being relatively simple, one could also include the effects of dissolved gas in the present approach by treating the system as a binary mixture and then using a DFT for the mixture to determine the influence of different amounts of the gas at the interface on $g(h)$. Such a DFT approach would, of course, include the effects of the gas compressibility, which are believed to be important for such surface nanobubbles. Finally, we should remark that some of the values of the wall attraction parameter $\epsilon_w^{(Y)}$ that we use are rather small, corresponding to very solvophobic surfaces. Considering simple molecular liquids at interfaces, such values are perhaps somewhat unrealistic, being weaker than one would typically expect to find. For example, for water on hydrophobic surfaces such as wax or Teflon, one does not see contact angles significantly greater than 130$^\circ$ \cite{evans2017drying}. However, at (patterned) superhydrophobic surfaces much larger contact angles are possible, so studying the behaviour of the model right up to the drying transition is relevant to such systems. Moreover, the model fluid considered here is a reasonably good model for certain colloidal suspensions (e.g.\ colloid-polymer mixtures \cite{dijkstra1999phase}), and for such systems even purely repulsive wall potentials are possible, when e.g.\ polymers are grafted onto the walls. The work here is highly relevant to such colloidal systems. \section*{Acknowledgements} We gratefully acknowledge Uwe Thiele for stimulating discussions and also for hosting AJA in M\"unster where some of this work was written. DNS acknowledges support via EPSRC grant number EP/R006520/1. { \section*{Appendix} {In Table \ref{table:nonlin} we give values of the coefficients in the binding potential $g(\Gamma)$ in Eq.~\eqref{eq:fit_g}, obtained by fitting to the results from DFT for a range of different values of the parameters in the wall potential, for the fluid with $\lambda=\sigma$ and $\beta\epsilon=0.5$.} } \begin{table*}[ht] \caption{{The parameter values $a_1$, $a_2$, ..., $a_8$ and $l_0$ in the binding potential $g(\Gamma)$ in Eq.~\eqref{eq:fit_g}, obtained from fitting to the data calculated using the DFT, for the various different wall potentials given in Eqs.~\eqref{Yukawall}--\eqref{expform}. The attraction and range parameters $\epsilon_w^{(i)}$ and $\lambda_w^{(i)}$ in these potentials are also given below. 
The first column refers to the number of the figure above in which the binding potentials are displayed.}} \centering \begin{tabular}{c| c| c| c| c| c| c| c| c| c| c| c| cl} \hline\hline Figure & wall type & $\beta\epsilon_w^{(i)}$&$\lambda_w^{(i)}/\sigma$ &$a_1$&$a_2$&$a_3$&$a_4$&$a_5$&$a_6$&$a_7$&$a_8$&$l_0$ \\ [0.5ex] \hline 3 & Y & 1.817 &1& -0.102902&-1.52976&-7.19867&45.6063&-82.5011&64.6922&-18.9215&0&1.13494\\ 6(a) & Y & 0 &1& 0.436017&1.56668&-5.80142& 10.5037&-10.2276&5.2632&-1.12803&0&0.764648\\ 6(a) & Y &0.3&1&0.188742&1.01604&-1.10085&-1.90374&6.14653&-5.55662&1.72492&0&0.834599\\ 6(a) & Y &0.6&1&0.0636616&-0.0223561&3.88825&-12.2983&17.6613&-12.1129&3.23861&0&0.898494\\ 6(a) & Y &0.9&1&-0.00913047&-1.07813&8.16947&-20.2538&25.6447&-16.2729&4.12271&0&0.914339\\ 6(a) & Y &1.2&1&-0.103283&-2.23702&13.5188&-31.1874&37.6513&-23.154&5.74073&0&0.894094\\ 6(a) & Y &1.5&1&-0.316933&-3.50244&22.3324&-53.6347&66.6376&-42.0627&10.6847&0&0.832845\\ 6(a) & Y &1.8&1&-0.0998242&-1.4847&-6.998&44.2127&-79.8251&62.5049&-18.2605&0&1.13798\\ 6(a) & Y &2.1&1&-0.187165&-0.875099&-20.9172&99.1027&-170.089&130.938&-38.0072&0&1.13219\\ 6(b)& LJ &0&1&0.412147&1.61894&-5.73846&10.0637&-9.48287&4.71324&-0.974014&0&0.775053\\ 6(b)& LJ &0.1&1&0.374952&0.774098&-3.02401&5.77044&-5.69195&2.90088&-0.607194&0&0.695483\\ 6(b)& LJ & 0.2&1&-0.0259831&0.448125&-2.49345&8.72886&-13.2268&9.67743&-2.72339&0&1.32478\\ 6(b)& LJ &0.3&1&-0.110418&0.500301&-8.71096&38.0178&-72.4731&72.5622&-37.2913&7.78349&1.08473\\ 6(b)& LJ & 0.4&1&-0.426825&0.267704&-21.7157&122.223&-279.301&326.179&-193.284&46.2408&0.917036\\ 7(a)& Y & 1.82 & 1&-0.10833&-1.40042&-7.99612&47.6637&-85.1519&66.386&-19.3495&0&1.13885 \\ 7(a)& LJ & 0.4&1&-0.426825&0.267704&-21.7157&122.223&-279.301&326.179&-193.284&46.2408&0.917036\\ 7(a)& G & 2.5 &1& -0.202165&-1.65283&-17.9331&135.977&-352.077&448.15&-283.891&71.6529&0.983802\\ 7(a)& E &1.813&1& -1.09031&-2.16095&30.576&-99.8449&165.53&-151.694&73.1886&-14.5204&0.823677\\ 9& E &1&0.1&0.422319&1.45854&-4.35437&3.93238&5.11002&-14.2398&11.6767&-3.39351&0.77295\\ 9& E &1&0.3&0.411692&1.11381&-2.86337&0.299185&10.7624&-19.7299&14.683&-4.09718&0.765508\\ 9& E &1&0.5&0.274977&0.375024&2.8439&-17.8841&43.0349&-52.6678&32.7557&-8.2269&0.788426\\ 9& E &1&0.7&0.0666694&-0.635408&10.4577&-39.9909&78.0552&-84.3057&48.0902&-11.3238&0.855997\\ 9& E &1&0.9&-0.0977131&-1.83569&16.2998&-50.1598&83.5545&-79.3539&40.6478&-8.74109&0.943813\\ 9& E &1&1.1&-0.683713&0.654502&11.6287&-50.4985&99.458&-105.867&59.1351&-13.6179&0.836162\\ 9& E &1&1.3&-0.868291&-0.0780018&10.9717&-29.3915&32.1351&-11.0863&-4.90455&3.32424&0.932054\\ 9& E &1&1.5&-1.0713&0.527968&-0.0153676&26.0912&-95.6713&142.44&-98.7294&26.4225&1.01572\\ 9& E &1&1.8&-1.43689&3.52772&-29.5133&143.46&-330.821&397.535&-242.33&59.4076&1.12934\\ 9& E &1&2.1&-1.86145&8.06742&-64.7503&260.356&-529.595&580.619&-329.22&76.0464&1.23315\\ [1ex] \hline \end{tabular} \label{table:nonlin} \end{table*}
{ "timestamp": "2019-04-16T02:08:03", "yymm": "1904", "arxiv_id": "1904.06497", "language": "en", "url": "https://arxiv.org/abs/1904.06497" }
\subsection{Partial synchrony} In the asynchronous setting we assume that less than one third of the voting power is in malicious hands. If we assume some form of synchrony, where there is an upper bound on the time it takes a message that was sent by a valid bank to reach another valid bank, then the system can be secured against an even bigger proportion of malicious voting power. Under such settings, when a valid bank receives a node from another bank, it can send this node to all other banks, and then wait twice that upper bound on the message transmission time to make sure that no other valid bank has received a conflicting node. Only then does it add the received node to its graph. \subsection{Practical} A final word concerning the small latency: note that the communication between banks is direct -- each bank should know the IP addresses of the other banks. This is not a must (it makes no difference where a message signed by another bank was received from, as long as it is properly signed), but it can save important time, and there is no reason why it won't be done that way. Courtesy among banks: 1) if you supported a block of another bank, wait for that bank to acknowledge this support before you support a newer block of its; this is important so as not to give legitimacy to ``satellite'' banks that support only a specific (probably malicious) bank. 2) A ``smaller'' bank should be the first to support a bigger bank's block. Small banks issue: \subsection{Forking} Forking may occur: \begin{itemize} \item when graphs diverge; \item when banks cannot make progress due to non-responding or ignoring banks. \end{itemize} However, it is still recommended to combine our protocol with other existing ideas for reducing the load, such as using ``side channels'' to perform frequent and small money transfers (micro-payments); see e.g.\ \cite{lightning}. \subsection{Implications} The protocol and settings defined above are rather limited, and are intended to provide only the minimum that is required in order to have a cryptocurrency that achieves the requirements we defined under asynchronous communication. We provide here some details and implications that we ignored above. The first case concerns allegedly malicious users. According to the \blockgraph\ definition, a user that submitted conflicting transactions might not be able to submit any additional transaction, which makes his money unusable. This can be seen as a punishment for such a user, but it might be too harsh a punishment, as the conflict might be an honest mistake, with no malicious intentions. To overcome this, we can allow a user to submit a ``group of transactions'' with a single sequence number. Such a group won't be considered as conflicting with any subset of the transactions it contains. Another concept that wasn't discussed above is the commission. We have mentioned in the introduction that a user will pay a commission for each of his issued transactions. The simplest approach is that the commission will be a fixed percentage of the transaction sum, given to the bank of the user that issued the transaction. Another option is to divide the commission between the user's bank (probably about half of the commission) and the banks that supported the block in which that transaction appeared (as they all took part in the agreement process). If we want to encourage the participation of as many banks as possible, then we can define the commission percentage to be variable, growing as more banks (voting power) support the block. 
As the issuing bank receives a fixed share of the commission, the commission it receives will be bigger as more banks support its block. This incentivizes the issuing bank to ask for the acceptance of as many banks as it can. However, sharing the commission between the banks that supported the block might cause problems if the same transaction appears in blocks of different banks, as it is then not clear according to which block we divide the commission. Such a case is possible if the user submitted a transaction to his bank and got no response, so he also sent it to another bank, and as a result both banks might create blocks that contain this transaction. In order to solve this problem we can define that the commission distribution is computed only according to the block of the original bank of the user. If that bank stops responding before it manages to accept its block, then the amount of commission that should have been taken remains effectively as inaccessible money.} \newcommand{\Execution}{ Let $(A,I,\mathbb{B},P)$ be a cryptocurrency system. An execution of the system is defined by the tuple $(X,Users,Admins,F,Tx,Msgs,Ac,Re)$. $X$ is a (possibly infinite) set of nodes that act as users and admins. $Users$ is a map $X\to2^{A}$ that assigns each node a set of account numbers, and $Admins$ is a map $X\to2^{\mathbb{B}}$ that assigns each node a set of admin IDs. For every $x_1,x_2\in X$, if $x_1\neq x_2$ then $Admins(x_1)$ and $Admins(x_2)$ are disjoint sets. If $a\in Users(x_1)$, then we say that~$x_1$ represents the account number~$a$. The same goes for admins. A node that represents an account number is considered a \mydef{user}. A node that represents an admin is considered an \mydef{admin}. Note that a node can be both a user and an admin. The nodes communicate by message passing. We assume a fully connected network. Users can create transactions and send them to the admins. The admins can send messages between them and accept/reject user transactions. Admins are supposed to send messages and accept/reject transactions only as prescribed by the protocol; admins that deviate from it are considered malicious. Malicious admins can perform arbitrary operations, but they cannot send messages that contain data that was digitally signed by someone else, unless they received that signed data beforehand. The same goes for accepting/rejecting transactions. $F$ is a map $X\to\mathbb{R}^{\geq0}\cup\{\infty\}$ that defines for every $x\in X$ the time when~$x$ fails by crashing.\footnote{A more practical definition should include for every node the time when it becomes active. We avoided doing so for ease of exposition.} If the time value is ``$\infty$'', the node never crashes. A node that crashed cannot create transactions, send/receive messages or accept/reject transactions after it crashed. An admin is considered valid if it is not malicious and if its representing node doesn't crash. $Tx=\{(tx,b,t)\}$ is the set of transactions that were submitted by the users, where $tx$ is a transaction (as defined in \Cref{sec:model}), $b\in\mathbb{B}$ is the admin to whom the transaction was submitted, and $t\in\mathbb{R}^+\cup\{\bot\}$ is the time that the node that represents~$b$ received~$tx$, where~$\bot$ means that it never receives it. $t$ can be $\bot$ only if either the node that represents~$b$ or some node that represents the source account of~$tx$ crashes. If~$t\neq\bot$, then both the node that represents~$b$ and some node that represents the source account of~$tx$ must not crash before~$t$. 
$Msgs=\{(b_s,b_t,m,tSend,tReceive)\}$ is the set of messages that are sent between admins (or, more precisely, between nodes that represent admins). $b_s,b_t\in\mathbb{B}$ are the source and destination admins respectively; both must be represented by some nodes. $m$ is the message contents, ${tSend\in\mathbb{R}^+}$ is the time the message was sent, and $tReceive\in\mathbb{R}^+\cup\{\bot\}$ is the time the message was received. Assume that $x_s,x_t\in X$ are the nodes that represent $b_s$ and $b_t$ respectively. If~$x_s$ crashes at time~$t'$, then $tSend<t'$. If $tReceive=\bot$ this means that the message was never received, which is possible only if~$x_s$ or~$x_t$ crashes. Otherwise, $tReceive>tSend$ and if~$x_t$ crashes at time~$t'$ then $t'>tReceive$. $Ac=\{(b,tx,t)\}$ is the set of accepted transactions, where $b\in\mathbb{B}$ is the admin, $tx$ is a transaction that was submitted by a client and $t\in\mathbb{R}^+$ is the time that~$b$ \mydef{accepted}~$tx$. $Re$ is a similar set that describes the \mydef{rejected} transactions. Using the above definition, we can examine whether a given execution satisfies the requirements defined in \Cref{sec:model}. }
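For concreteness, the records making up an execution can be transcribed directly as data structures. The sketch below is just such a transcription of the definitions above (it is not part of the protocol itself); $\bot$ is represented by \texttt{None}.
\begin{verbatim}
# Sketch: the execution-model records above as plain data structures.
# Field names mirror the text; None plays the role of "bot".
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SubmittedTx:               # element of Tx = {(tx, b, t)}
    tx: bytes                    # the signed transaction
    b: int                       # admin the transaction was submitted to
    t: Optional[float]           # receipt time; None if never received

@dataclass(frozen=True)
class Message:                   # element of Msgs
    b_s: int                     # source admin
    b_t: int                     # destination admin
    m: bytes                     # message contents
    t_send: float
    t_receive: Optional[float]   # None if the message is never delivered

@dataclass(frozen=True)
class Decision:                  # element of Ac (accepted) or Re (rejected)
    b: int
    tx: bytes
    t: float                     # time the admin accepted/rejected tx
\end{verbatim}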
\subsection{Future Work} There are still many open challenges for implementing the system presented in this paper, and further research is required. For example, we might want to save memory, and not remember the entire \blockgraph. The question is what we can forget, and under what conditions. Another challenge arises with a large number of banks. The more banks we have, the bigger the requirements for memory and bandwidth. At some point, if there are banks that don't have strong enough hardware, while their voting power on the other hand is significant (so they are required for the consensus process to complete), this might cause extra delay in the system. An existing downside of many cryptocurrencies, including Bitcoin and the coin presented in this paper, is that the system is completely transparent. I.e., every transaction that is accepted is visible to the public, so that everyone can see the source and destination accounts, and the sum of money that was transferred. Yet, the identities of the account owners are of course generally unknown. An interesting direction in our case is whether we can limit the transparency to the level of banks, so that the only entities that truly know the transaction source, destination and sum of money (all together) are the banks of the issuing and receiving clients. An important concept in cryptocurrencies in which consensus is based on coin possession, such as in Proof of Stake, is that malicious entities will be financially damaged because of their acts. Otherwise, different entities (and especially strong entities, that possess a lot of coin) might maliciously attempt to maneuver the currency, without getting harmed. This is the famous \mydef{nothing at stake} problem. Note that this is never entirely true in practice, because an unstable coin will have a lower value, so entities that possess a lot of coin have an interest in keeping the coin credible. Yet, if the coin value decreases, this will also harm non-malicious parties. Thus, the best practice is indeed to financially damage those malicious entities. Note that in our case banks should have a private account, into which they receive the commission on their users' accepted transactions. A best practice would be to define that transactions that wish to transfer the private money of a bank are allowed to appear only in \startB s of that bank itself. Recall that a malicious bank (one that performed a malicious act) won't manage to create new blocks, as it cannot reference nodes of other banks that are already aware of its malice, so it cannot gain the required support for a block. Thus, a malicious bank won't be able to use its private money (and once it has turned out to be malicious, it won't receive any more commission). If we can incentivize a bank to keep money in its private account, this can be used as a means to make sure the bank follows the protocol: if it doesn't, it will lose that money. This will be similar to real banks, which, by regulatory requirements and in order to ensure their stability, must have some capital of their own (such stability is not relevant to our banks, as, in contrast to real-world banks, they cannot use their clients' money). The most reasonable way to incentivize the banks to keep private money is by limiting their voting power in case they don't have enough private money. I.e., we can define a percentage of its voting power that a bank must hold as private money in order to use its full voting power; if it doesn't hold the required percentage, its voting power is decreased proportionally (a sketch of this rule is given below). More research is required in order to make sure that the system still complies with the agreement and termination requirements. \subsection{Banking and Democracy} Recall that the administrators are the ones that manage the consensus, concerning the existing balance and accepted transactions. However, we claim that the consensus should be between the users. I.e., it is the users who should be interested in the consensus. Without consensus there is no value to the coins they hold, as there is no agreement on the amount of coins each user holds. The more coins you hold, the more responsibility you have for the cryptocurrency's future, as you will lose more if its value drops. The way we can coordinate between all the different users is by means of democracy -- letting the majority of the ``people'' decide. However, the ``people'' in our case will not be the users, but rather the coins. As coins belong to users, we will provide each user with voting power that is proportional to the amount of coins he owns. The problem is that the users are too numerous, and asking all of them to vote on each decision (i.e., transaction) is not practical. Recall that in a democracy the people don't need to accept every rule; they just need to choose representatives to make the decisions for them (or on their behalf). Thus, our users simply need to choose representatives. The most trivial representatives are the administrators, whose job is to maintain the consensus. 
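A toy sketch of this delegation, combined with the private-deposit rule proposed in the future-work discussion above, is given below; the required fraction is an assumed parameter, not something fixed by the protocol:
\begin{verbatim}
# Sketch: a bank's voting power as the sum of its clients' coins,
# reduced proportionally if its private deposit is below the required
# fraction. `required_fraction` is an assumed, illustrative parameter.
def voting_power(client_balances, private_money, required_fraction=0.1):
    base = sum(client_balances)         # stake delegated by the bank's users
    required = required_fraction * base
    if private_money >= required:
        return base
    return base * private_money / required   # proportional reduction

# usage: a bank managing 1000 coins but holding only half the deposit
print(voting_power([400, 350, 250], private_money=50.0))  # -> 500.0
\end{verbatim}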
We shall now discuss the identities of the administrators. In the real world we don't keep the money ourselves\hideC{ (at least most of our money)}. Instead, we let someone -- the bank -- keep it for us. When we want to use this money we simply address our bank\hideC{ (either directly or indirectly\footnote{Most often we address our credit card company, and they address our bank.})}. In cryptocurrency, on the other hand, the amount of money you have is decided by agreement between the admins. The advantage of such a scheme is that you are not dependent on a single bank but rather on the agreement between multiple admins. Moreover, the operations of the admins are completely transparent and, in fact, each of us can become an admin (or at least can gather the information that they get) and check for himself the correctness of the consensus. Yet, there is some convenience in the bank scheme, where each user has a single address for all of his requests\hideC{ (convenient both for the user and for the entire system)}. Our solution merges the responsibilities of everyday banks and cryptocurrency admins by making the admins function as banks. Each user will choose a bank (an administrator) where his money will be deposited, and whom he should directly address for every request. Of course the user must also have the option to switch banks, so he won't lose his money in case his bank fails or otherwise ignores his requests. A bank, just like a user, will have a pair of secret and public keys. The user's account number will be the combination of his bank's public key and his own public key (a sketch of this appears below). \hideC{The bank's public key can be seen as the ``branch number'', while the user's public key is the ``inner account number'' in that specific bank.} Let's assume I ask my bank to transfer money from my account to another. As we are living in a democracy, the transfer must be accepted by the holders of the majority of the coin\hideC{ (i.e., a group of coin holders that together possess a majority of the money must accept this transfer)}. In our approach, each bank represents its clients and `votes' on their behalf. Roughly speaking, each bank gets its voting power according to the sum of money in the user accounts it manages. Once the banks agree on a transaction, we can see it as if the money holders themselves agreed on that transaction\hideC{ (as each money holder delegated his voting power to his bank)}. We shall now briefly describe the implemented protocol. In blockchain-based coins there is a single global ledger (the \mydef{blockchain}) that lists all the transactions, and all the admins (should) agree on this ledger. In our system every admin shall have a private blockchain of its own, which lists (mostly) the transactions its clients committed. That blockchain might also include a transaction of a user of another bank, in case that user asks to leave his original bank and move to this bank. If we want to compute the balance of a client, observing only the blockchain of his bank is not enough, as it doesn't list money that he receives from clients of other banks. The information about such money transfers is found in the blockchains of the other banks (the banks whose clients transferred that money). As banks must be able to compute such account balances, they must hold all of the existing blockchains. In fact, each block in a given blockchain will contain pointers to blocks in other blockchains. More information appears in \Cref{sec:settings}. \hideD{ So, when I send a transaction to my bank, my bank will check this transaction to make sure that I have the required money to spend, then it will probably group it with transactions of other users into a block, and it will chain this block to its private blockchain. Next, my bank will send this new block to all the other banks, in order to receive their acceptance for the block. Once a majority of banks (by means of voting power) has sent its acceptance for the block, my bank has the proof that it should be accepted. More information concerning the exact protocol appears in \Cref{sec:prot}.}
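The account-number scheme mentioned above is easy to illustrate. The sketch below uses Ed25519 keys from the Python \texttt{cryptography} package; the simple concatenation format is our assumption for illustration, not something the protocol fixes:
\begin{verbatim}
# Sketch: an account number as the combination of the bank's public key
# and the user's public key. The concatenation encoding is an assumed,
# illustrative format.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

bank_sk = Ed25519PrivateKey.generate()   # the bank's secret key
user_sk = Ed25519PrivateKey.generate()   # the user's secret key

def raw(public_key):
    return public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)

# "branch number" (bank) followed by "inner account number" (user)
account_number = raw(bank_sk.public_key()) + raw(user_sk.public_key())
print(account_number.hex())
\end{verbatim}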
The presented approach has many advantages. Among other things, it yields a completely decentralized and trustless system, an agreement mechanism without superfluous energy consumption, and a deterministic cryptocurrency that operates over asynchronous channels and (theoretically) achieves low latency. From the user's side, just as in real life, you address all of your requests directly to your bank. The banks have an incentive to give their clients good service, for otherwise the clients will move to other banks (and fewer clients mean less income).

\section{Introduction}\label{sec:intro} \input{Introduction}
\section{Related Works}\label{sec:related} \input{RelatedWorks}
\section{Model}\label{sec:model} \input{Model}
\section{Solution}\label{sec:solution} \input{Solution}
\section{Conclusions}\label{sec:conclusions} \input{Conclusions}
\bibliographystyle{abbrv}
\subsection{The Blockgraph}\label{sec:settings} \input{Settings}
\subsection{The protocol}\label{sec:prot} \input{Protocol}
{ "timestamp": "2019-09-26T02:08:55", "yymm": "1904", "arxiv_id": "1904.06522", "language": "en", "url": "https://arxiv.org/abs/1904.06522" }
\section{Introduction} The stellarator is a promising approach to magnetic confinement, as it does not require plasma current to produce rotational transform and thus is inherently steady-state and stable to current-driven modes. However, collisionless trajectories are not guaranteed to be confined in three-dimensional geometry as they are in axisymmetric systems \citep{Gibson1967,Helander2014}. This can lead to poor confinement of energetic particles and increased neoclassical transport. A possible solution is the application of numerical optimization techniques to carefully tailor the magnetic geometry for improved confinement properties. One of the first demonstrations of this technique was in the design of the Wendelstein 7-X (W7-X) stellarator \citep{Grieger1992,Lotz1991}, which was optimized for small neoclassical transport in the $1/\nu$ regime and for small bootstrap current, in addition to several other physics criteria. Another approach to improve confinement in stellarators is to obtain a configuration whose magnetic field strength exhibits a symmetry direction when expressed in Boozer coordinates, known as quasi-symmetry \citep{Nuhrenberg1988}. This leads to a conserved canonical momentum of the guiding center motion such that neoclassical properties are similar to those in a tokamak. However, perfect quasi-symmetry can never be achieved globally in practice \citep{Garren1991,Landreman2018a}, and it is often desirable to include symmetry-breaking components of the magnetic field strength in consideration of other design parameters, such as magneto-hydrodynamic (MHD) stability and energetic particle confinement \citep{Nelson2003,Henneberg2019}. As one must allow for breaking of quasi-symmetry, it remains essential to include a measure of neoclassical transport such that the symmetry-breaking harmonics of the field strength do not significantly degrade the confinement. Neoclassical transport is governed by solutions of the drift kinetic equation (DKE), \eqref{eq:DKE}, from which moments (e.g. radial fluxes and bootstrap current) are computed. The DKE local to a flux surface can be solved numerically \citep{Landreman2014,Belli2015}. However, this four-dimensional problem is expensive to solve within an optimization loop, especially in low-collisionality regimes for which increased pitch-angle resolution is required to resolve the collisional boundary layer. Therefore, it may be desirable to consider an analytic reduction of the DKE. Under the assumption of low collisionality, a bounce-averaged DKE can be considered \citep{Beidler1995,Calvo2018}. While bounce-averaging can significantly reduce the computational cost by decreasing the spatial dimensionality, this approach typically requires restrictions on the geometry, such as closeness to omnigeneity or a model magnetic field. Additional reduction of the DKE can be made in low-collisionality regimes, resulting in semi-analytic expressions. For example, the effective ripple, $\epsilon_{\text{eff}}$ \citep{Nemov1999}, quantifies the geometric dependence of the $1/\nu$ radial transport and has been widely used during optimization studies \citep{Zarnstorff2001,Ku2008,Henneberg2019}. This model, though, assumes very small $E_r$, which is not always an experimentally-relevant regime. A low-collisionality semi-analytic bootstrap current model \citep{Shaing1989} is also commonly adopted for stellarator design \citep{Beidler1990,Hirshman1999}. However, this analytic expression is known to be ill-behaved near rational surfaces.
Furthermore, numerical solutions of the DKE in the low-collisionality limit have been shown in benchmarks to differ significantly from the semi-analytic model \citep{Beidler2011,Kernbichler2016}. Any analytic reduction of the DKE implies additional assumptions, such as on the collisionality, the size of $E_r$, or the magnetic geometry. Due to the limitations of bounce-averaged and semi-analytic models, there are benefits to computing neoclassical quantities using numerical solutions to the DKE without approximation. With the numerical methods currently used for stellarator optimization, however, this approach becomes computationally challenging within an optimization loop. Because stellarator geometry is fully three-dimensional, its optimization requires navigation through high-dimensional spaces, such as the space of the shape of the outer boundary of the plasma or the shapes of electromagnetic coils. The number of parameters required to describe these spaces, $N$, is often quite large ($\mathcal{O}(10^2)$). Knowledge of the gradient of the objective function with respect to these parameters can greatly improve the convergence to a local minimum. Once a descent direction is identified, each iteration reduces to a one-dimensional line search. Gradient-based optimization with the Levenberg-Marquardt algorithm in the STELLOPT code \citep{Strickler2004} has been widely used in the stellarator community and led to the design of NCSX \citep{Reiman1999}. Although derivative information is valuable, numerically computing the derivative of a figure of merit $f$ (for example, with finite difference derivatives) can be prohibitively expensive, as $f$ must be evaluated $\mathcal{O}(N)$ times. For neoclassical optimization, this implies solving the DKE $\mathcal{O}(N)$ times; thus including finite-collisionality neoclassical quantities in the objective function is often impractical. In this work we describe an adjoint method for neoclassical optimization. With this method, the computation of the derivatives of $f$ with respect to $N$ parameters has cost comparable to solving the DKE twice, thus making the inclusion of these quantities possible within an optimization loop. In this work we obtain derivatives of neoclassical figures of merit with respect to local geometric parameters on a surface rather than the outer boundary or coil shapes. However, the geometric derivatives we compute provide an important step toward adjoint-based optimization of MHD equilibria, as discussed in section \ref{sec:equilibria_opt}. Adjoint methods have been applied in many fields including aerodynamic engineering and computational fluid dynamics \citep{Pironneau1974,Glowinski1975}, geophysics \citep{Plessix2006,Fichtner2006}, structural engineering \citep{Allaire2005}, and tokamak divertor design \citep{Dekeyser2014a,Dekeyser2014c,Dekeyser2014b}. They have only recently been implemented for stellarator design, namely for the design of coil shapes \citep{Paul2018} and for efficiently computing shape gradients for MHD equilibria \citep{Antonsen2019}. The numerical method is quite general and has the potential to greatly impact many inverse design problems in magnetic confinement fusion. In section \ref{sec:dke} we provide an overview of the numerical solution of the DKE local to a flux surface. In section \ref{sec:adjoint_approach} the adjoint neoclassical method is described.
Two approaches to the adjoint method, termed continuous and discrete, are presented, and their implementation and benchmarks are discussed in section \ref{sec:implementation}. The adjoint method is used to compute derivatives of moments of the neoclassical distribution function with respect to local geometric quantities. The derivative information can be used to identify regions of increased sensitivity to magnetic perturbations, as discussed in section \ref{sec:local_sensitivity}. We demonstrate adjoint-based optimization in section \ref{sec:vacuum_opt} by locally modifying the field strength on a flux surface. A discussion of the application of this method for optimization of MHD equilibria is presented in \ref{sec:equilibria_opt}. Finally, the adjoint method is applied to accelerate the calculation of the ambipolar electric field in section \ref{sec:ambipolarity}. \section{Drift kinetic equation} \label{sec:dke} The Stellarator Fokker-Planck Iterative Neoclassical Solver (SFINCS) code \citep{Landreman2014} solves the drift kinetic equation, \begin{equation} \left(v_{||} \bm{b} + \bm{v}_E \right) \cdot\nabla f_{1s} - C_s(f_{1s}) = -\bm{v}_{\text{m}s} \cdot \nabla \psi \partder{f_{Ms}}{\psi}, \label{eq:DKE} \end{equation} for general stellarator geometry. Here $\bm{b} = \bm{B}/B$ is a unit vector in the direction of the magnetic field, $v_{||} = \bm{v}\cdot \bm{b}$ is the parallel component of the velocity, and $2\pi \psi$ is the toroidal flux. The Fokker-Planck collision operator is $C_s(f_{1s})$, linearized about a Maxwellian $f_{Ms} = n_sv_{ts}^{-3} \pi^{-3/2} e^{-v^2/v_{ts}^2}$ where $v_{ts} = \sqrt{2T_s/m_s}$ is the thermal speed, $n_s$ is the density, $T_s$ is the temperature, $m_s$ is the mass, and the subscript indicates species. In \eqref{eq:DKE}, derivatives are performed holding $W_s = m_s v^2/2 +q_s \Phi$ and $\mu = v_{\perp}^2/2B$ fixed, where $v = \sqrt{\bm{v} \cdot \bm{v}}$ is the magnitude of velocity, $\Phi$ is the electrostatic potential, $v_{\perp} = \sqrt{v^2 - v_{||}^2}$ is the perpendicular velocity, and $q_s$ is the charge. The radial magnetic drift is \begin{equation} \bm{v}_{\text{m}s}\cdot \nabla \psi = \frac{m_s }{q_s B^2} \left( v_{||}^2 + \frac{v_{\perp}^2}{2} \right) \bm{b} \times \nabla B \cdot \nabla \psi, \label{eq:radial_drift} \end{equation} assuming a magnetic field in MHD force balance, and $\bm{v}_E$ is the $\bm{E} \times \bm{B}$ velocity \begin{equation} \bm{v}_E = \frac{\bm{B} \times \nabla \Phi}{B^2}. \end{equation} Throughout we assume $\Phi=\Phi(\psi)$ such that \eqref{eq:DKE} is linear. In \eqref{eq:DKE} we will not consider the effect of inductive electric fields, as this can be assumed to be small for stellarators without inductive current drive. We also do not consider the effects of magnetic drifts tangential to the flux surface in \eqref{eq:DKE}, as these only become important when $E_r$ is small \citep{Paul2017}. SFINCS solves \eqref{eq:DKE} locally on a flux surface $\psi$, thus it is four-dimensional. The SFINCS coordinates include two angles (poloidal angle $\theta$ and toroidal angle $\zeta$), speed $x_s = v/v_{ts}$, and pitch angle $\xi_s = v_{||}/v$. Specifics about the implementation of \eqref{eq:DKE} in the SFINCS code are described in appendix \ref{app:trajectory_models}. We will refer to two choices of implementation, the full trajectory model and the DKES trajectory model. The full trajectory model maintains $\mu$ conservation as radial coupling (terms involving $\partial f_{1s}/\partial \psi$) is dropped. 
While the DKES model does not conserve $\mu$ when the radial electric field $E_r \neq 0$, the adjoint operator under the DKES model takes a particularly simple form, as discussed in section \ref{sec:continuous}. This model also does not introduce any unphysical constraints on the distribution function when $E_r \neq 0$, as occurs for the full trajectory model \citep{Landreman2014}. These constraints motivate the introduction of particle and heat sources, which are discussed in the following section. We will discuss some of the details of the implementation of the DKE in the SFINCS code, as these need to be considered in arriving at the adjoint equation. However, the adjoint neoclassical approach is quite general and could be implemented in other drift kinetic codes with slight modification. From solutions of \eqref{eq:DKE}, several neoclassical quantities are computed, including the flux surface averaged parallel flow, \begin{align} V_{||,s} = \frac{\left\langle B \int d^3 v \, f_{1s} v_{||} \right\rangle_{\psi}}{n_s \langle B^2 \rangle_{\psi}^{1/2}}, \label{eq:parallel_flow} \end{align} the radial particle flux, \begin{align} \Gamma_s = \left \langle \int d^3 v \, \left(\bm{v}_{\text{m}s} \cdot \nabla \rho \right) f_{1s} \right \rangle_{\psi}, \label{eq:particle_flux} \end{align} and the radial heat flux (sometimes referred to as an energy flux), \begin{align} Q_s = \left \langle \int d^3 v \, \frac{m_sv^2}{2} \left(\bm{v}_{\text{m}s} \cdot \nabla \rho \right) f_{1s} \right \rangle_{\psi}. \label{eq:heat_flux} \end{align} We will also consider species-summed quantities including the bootstrap current, $J_b = \sum_s q_s n_s V_{||,s}$, the radial current, $J_r = \sum_s q_s \Gamma_s$, and the total heat flux, $Q_{\text{tot}} = \sum_s Q_s$. Here the effective normalized radius is $\rho = \sqrt{\psi/\psi_0}$, where $2\pi \psi_0$ is the toroidal flux at the boundary. \subsection{Sources and constraints} \label{sec:sources} To avoid unphysical constraints on $f_{1s}$ implied by the moment equations of \eqref{eq:DKE} in the presence of a non-zero $E_r$ \citep{Landreman2014}, particle and heat sources are added to the DKE \eqref{eq:dke_model}, \begin{gather} \mathbb{L}_{0s}f_{1s} - C_s (f_{1s}) - f_{Ms} \left(x_s^2 - \frac{5}{2}\right) S_{1s}^f(\psi) - f_{Ms}\left(x_s^2-\frac{3}{2}\right) S_{2s}^f(\psi) = \mathbb{S}_{0s}, \end{gather} where $S_{1s}^f(\psi)$ and $S_{2s}^f(\psi)$ are unknowns such that $S_{1s}^f$ provides a particle source and $S_{2s}^f$ provides a heat source. The collisionless trajectory operator in SFINCS coordinates is \begin{gather} \mathbb{L}_{0s} = \dot{\bm{r}} \cdot \nabla + \dot{x}_s \partder{}{x_s} + \dot{\xi}_s \partder{}{\xi_s}, \label{eq:L_0s} \end{gather} and the inhomogeneous drive term is $\mathbb{S}_{0s} = - (\bm{v}_{\text{m}s} \cdot \nabla \psi) \partial f_{Ms}/\partial \psi$. The source functions are determined via the requirement that $\langle \int d^3 v \, f_{1s} \rangle_{\psi} = 0$ and $\langle \int d^3 v \, x_s^2 f_{1s}\rangle_{\psi} = 0$ (i.e. $f_{1s}$ does not provide net density or pressure). So, the following system of equations is solved, \begin{gather} \underbrace{\left[ \begin{array}{ccc} \mathbb{L}_{0s} -C_s & - f_{Ms} (x_s^2-\frac{5}{2}) & -f_{Ms} (x_s^2-\frac{3}{2}) \\ \mathbb{L}_{1s} & 0 & 0 \\ \mathbb{L}_{2s} & 0 & 0 \end{array} \right]}_{\mathbb{L}_s} \underbrace{\left[ \begin{array}{c} f_{1s} \\ S_{1s}^{f} \\ S_{2s}^{f} \end{array} \right]}_{F_s} = \underbrace{\left[ \begin{array}{c} \mathbb{S}_{0s} \\ 0 \\ 0 \end{array} \right]}_{\mathbb{S}_s}.
\label{eq:dke_array} \end{gather} The velocity-space averaging operations are denoted $\mathbb{L}_{1s}f_{1s} = \langle \int d^3 v \, f_{1s} \rangle_{\psi}$ and $\mathbb{L}_{2s}f_{1s} = \langle \int d^3 v \, f_{1s} x_s^2 \rangle_{\psi}$. The full multi-species system can be written schematically as, \begin{gather} \left[ \begin{array}{c} \mathbb{L}_{1} \\ \vdots \\ \mathbb{L}_{N_{\text{species}}} \end{array} \right] \left[ \begin{array}{c} F_{1} \\ \vdots \\ F_{N_{\text{species}}} \end{array} \right] = \left[ \begin{array}{c} \mathbb{S}_{1} \\ \vdots \\ \mathbb{S}_{N_{\text{species}}} \end{array} \right]. \label{eq:dke_species_array} \end{gather} Here the linear systems corresponding to each species, as in \eqref{eq:dke_array}, are coupled through the collision operator. We use the following notation to refer to the above system, \begin{gather} \mathbb{L} F = \mathbb{S}. \label{eq:linear} \end{gather} \section{Adjoint approach} \label{sec:adjoint_approach} The goal of the adjoint neoclassical approach is to efficiently compute derivatives of a moment of the distribution function, $\mathcal{R}$ (e.g. $V_{||,s}, \Gamma_s, Q_s, J_b, J_r, Q_{\text{tot}})$, with respect to many parameters. Consider a set of parameters, $\Omega = \{ \Omega_i\}_{i=1}^{N_{\Omega}}$, on which $\mathcal{R}$ depends. Computing a forward difference derivative with respect to $\Omega$ requires $N_{\Omega} + 1$ solutions of \eqref{eq:linear}. With the adjoint approach, $\partial \mathcal{R}/\partial \Omega$ can be computed with one solution of \eqref{eq:linear} and one solution of a linear adjoint equation of the same size as \eqref{eq:linear}. Thus if $N_{\Omega}$ is very large and the solution to \eqref{eq:linear} is computationally expensive to obtain, the adjoint approach can reduce the cost by a factor of approximately $N_{\Omega}$. For stellarator optimization, it is desirable to compute derivatives with respect to parameters which describe the magnetic geometry. In fully three-dimensional geometry, $N_{\Omega}$ is $\mathcal{O}(10^2)$ and solving \eqref{eq:linear} is the most expensive part of computing $\mathcal{R}$ (rather than constructing the linear system or taking a moment of the distribution function). Thus the adjoint approach can provide a computational savings of a factor of $\mathcal{O}(10^2)$. The adjoint method is also advantageous over numerical derivatives, as it avoids additional noise from discretization error. In what follows we consider $\Omega$ to be a set of parameters describing the magnetic geometry, which will be specified in section \ref{sec:implementation}. We compute the derivatives of $\mathcal{R}$ using two approaches. In the first approach, we define an inner product which involves integrals over the distribution function, and an adjoint operator is obtained with respect to this inner product. This we refer to as the continuous approach. In the second approach, we consider the DKE after discretization, defining an adjoint operator with respect to the Euclidean dot product. This we refer to as the discrete approach. While these approaches should provide identical results within discretization error, the advantages and drawbacks of each approach will be discussed at the end of section \ref{sec:discrete}. \subsection{Continuous approach} \label{sec:continuous} Let $F = \{F_s\}_{s=1}^{N_{\text{species}}}$ be the set of unknowns computed with SFINCS before discretization, denoted by the column vector in \eqref{eq:dke_species_array} with $F_s$ given by \eqref{eq:dke_array}.
That is, $F$ consists of a set of $N_{\text{species}}$ distribution functions over $(\theta,\zeta,x_s,\xi_s)$ and their associated source functions. We define an inner product between two such quantities in the following way, \begin{align} \langle F,G \rangle = \sum_s \left[ \left \langle \int d^3 v \, \frac{f_{1s} g_{1s}}{f_{Ms}} \right \rangle_{\psi} + S_{1s}^f S_{1s}^g + S_{2s}^f S_{2s}^g \right]. \label{eq:inner_product} \end{align} Here the superscript on $S_{1s}$ and $S_{2s}$ denotes the distribution function with which the source functions are associated, and the sum is over species. The space of continuous functions, $F$, of this form such that $\langle F,F \rangle$ is bounded will be denoted by $\mathcal{H}$. It can be seen that \eqref{eq:inner_product} is indeed an inner product, as it satisfies symmetry ($\langle G,F \rangle = \langle F,G \rangle$ $\forall F,G \in \mathcal{H}$), linearity ($\langle F + G,H \rangle = \langle F,H \rangle+ \langle G,H \rangle$ $\forall F,G,H \in \mathcal{H}$ and $\langle F, a G \rangle = a\langle F, G \rangle$ $\forall F,G \in \mathcal{H}$, $a \in \mathbb{R}$), and positive definiteness ($\langle F, F \rangle \geq 0$ $\forall F \in \mathcal{H}$, with $\langle F,F \rangle = 0$ only if $F = 0$) \citep{Rudin2006}. This implies that if $\mathcal{H}$ were finite-dimensional, then for any linear operator $L$ there would exist a unique adjoint operator $L^{\dagger}$ such that $\langle LF,G \rangle = \langle F, L^{\dagger}G \rangle$ for all $F, G \in \mathcal{H}$. While here $\mathcal{H}$ is not finite-dimensional, we will show that such an adjoint operator exists for this inner product. Note that the norm associated with this inner product, $|| F || = \sqrt{\langle F,F \rangle}$, is similar to the free energy norm, \begin{gather} W = \sum_s \left \langle \int d^3 v \, \frac{T_s f_{1s}^2}{2f_{Ms}} \right \rangle_{\psi}, \end{gather} which obeys a conservation equation in gyrokinetic theory \citep{Krommes1994,Abel2013,Landreman2015}. The choice of inner product \eqref{eq:inner_product} is advantageous, as the linearized Fokker-Planck collision operator becomes self-adjoint for species linearized about Maxwellians with the same temperature. In what follows, we assume that all included species have the same temperature. This assumption could be lifted with a modification to the collision operator that appears in the adjoint equation (see appendix \ref{app:collision}). This assumption is not necessary when using the discrete approach (see section \ref{sec:discrete}). Consider a moment of the distribution function $\mathcal{R} \in \{ V_{||,s}, \Gamma_s, Q_s, J_b, J_r, Q_{\text{tot}}\}$, which can be written as an inner product with a vector $\widetilde{\mathcal{R}} \in \mathcal{H}$, \begin{gather} \mathcal{R} = \langle F, \widetilde{\mathcal{R}} \rangle, \label{eq:inner_product_R} \end{gather} according to \eqref{eq:inner_product}. For example, \begin{gather} \widetilde{J_r} = \left[ \begin{array}{c} q_s \bm{v}_{\text{m}s} \cdot \nabla \psi f_{Ms} \\ 0 \\ 0 \end{array} \right]_{s=1}^{N_{\text{species}}}, \end{gather} where the column structure corresponds with that in \eqref{eq:dke_array} and \eqref{eq:dke_species_array}. We are interested in computing the derivative of $\mathcal{R}$ with respect to a set of parameters, $\Omega = \{\Omega_i\}_{i=1}^{N_{\Omega}}$.
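In a numerical implementation, \eqref{eq:inner_product} reduces to weighted sums over the discretization grid. As a minimal single-species sketch (the quadrature weights \texttt{w}, which would absorb the $d^3 v$ factors and the flux-surface average, are hypothetical):

\begin{verbatim}
# Sketch of a discretized single-species inner product: a weighted sum
# over the (theta, zeta, x, xi) grid plus the source-function products.
import numpy as np

def inner_product(f1, g1, fM, w, Sf, Sg):
    # f1, g1, fM: arrays on the phase-space grid; Sf, Sg: (S1, S2) pairs.
    return np.sum(w * f1 * g1 / fM) + Sf[0] * Sg[0] + Sf[1] * Sg[1]
\end{verbatim}

We now return to the derivative of $\mathcal{R}$ with respect to $\Omega$.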
This derivative can be computed with the chain rule, \begin{align} \left(\partder{\mathcal{R}}{\Omega_i}\right)_{\mathbb{L}F=\mathbb{S}} = \left(\partder{\mathcal{R}}{\Omega_i}\right)_{F} + \left\langle \widetilde{\mathcal{R}}, \left(\partder{F}{\Omega_i}\right)_{\mathbb{L} F = \mathbb{S}} \right\rangle. \label{eq:derivative} \end{align} The subscripts in \eqref{eq:derivative} denote the quantity that is held fixed while the derivative is computed. The first term on the right hand side accounts for the explicit dependence on $\Omega_i$ while the second accounts for the implicit dependence on $\Omega_i$ through $F$. Here $\left(\partial F/\partial \Omega_i\right)_{\mathbb{L} F =\mathbb{S}}$ can be computed by considering perturbations to the linear system \eqref{eq:linear}, noting that in general both $\mathbb{L}$ and $\mathbb{S}$ can depend on $\Omega$, \begin{align} \mathbb{L} \left(\partder{F}{\Omega_i}\right)_{\mathbb{L}F = \mathbb{S}} = \left(\partder{\mathbb{S}}{\Omega_i} - \partder{\mathbb{L}}{\Omega_i} F\right). \label{eq:forward} \end{align} Computing $\left(\partial F/\partial \Omega\right)_{\mathbb{L}F=\mathbb{S}}$ using \eqref{eq:forward} requires solving $N_{\Omega}$ linear systems of the same dimension as the DKE \eqref{eq:linear}. To avoid this additional computational cost, we instead solve an adjoint equation, \begin{align} \mathbb{L}^{\dagger}q^{\mathcal{R}}=\widetilde{\mathcal{R}}. \label{eq:adjoint} \end{align} In what follows, we show that the adjoint variable, $q^{\mathcal{R}}$, can be used to compute $\left(\partial \mathcal{R}/\partial \Omega\right)_{\mathbb{L}F = \mathbb{S}}$ without solving \eqref{eq:forward} for each $\Omega_i$. Using \eqref{eq:adjoint} with \eqref{eq:derivative}, \begin{align} \left(\partder{\mathcal{R}}{\Omega_i} \right)_{\mathbb{L}F = \mathbb{S}} = \left(\partder{\mathcal{R}}{\Omega_i} \right)_F + \left \langle \mathbb{L}^{\dagger} q^{\mathcal{R}}, \left( \partder{F}{\Omega_i}\right)_{\mathbb{L}F=\mathbb{S}} \right \rangle, \end{align} and applying the adjoint property, we obtain \begin{align} \left(\partder{\mathcal{R}}{\Omega_i} \right)_{\mathbb{L}F = \mathbb{S}} &= \left(\partder{\mathcal{R}}{\Omega_i} \right)_F + \left \langle q^{\mathcal{R}}, \mathbb{L} \left(\partder{F}{\Omega_i}\right)_{\mathbb{L}F=\mathbb{S}} \right \rangle. \end{align} Using \eqref{eq:forward}, \begin{gather} \left(\partder{\mathcal{R}}{\Omega_i} \right)_{\mathbb{L}F = \mathbb{S}} = \left(\partder{\mathcal{R}}{\Omega_i} \right)_F + \left \langle q^{\mathcal{R}}, \left( \partder{\mathbb{S}}{\Omega_i} - \partder{\mathbb{L}}{\Omega_i} F \right) \right \rangle. \label{eq:derivative_adjoint} \end{gather} So, \eqref{eq:derivative_adjoint} provides the same derivative information as \eqref{eq:derivative}. Thus, using (\ref{eq:derivative_adjoint}), the derivative with respect to $\Omega$ can be computed with the solution to two linear systems, (\ref{eq:linear}) and (\ref{eq:adjoint}). The partial derivatives on the right hand side of \eqref{eq:derivative_adjoint} can be computed analytically by considering the explicit geometric dependence of $\mathcal{R}$, $\mathbb{L}$, and $\mathbb{S}$. When $N_{\Omega}$ is large, the cost of computing $\partial \mathcal{R}/\partial \Omega$ using \eqref{eq:derivative_adjoint} is dominated not by the linear solve but by constructing $\partial \mathbb{S}/\partial \Omega$ and $\partial \mathbb{L}/\partial \Omega$ and computing the inner product. Thus the cost still scales with $N_{\Omega}$. 
However, we obtain a significant savings in comparison with forward difference derivatives, as shown in section \ref{sec:implementation}. The adjoint operator for each species takes the following form, \begin{gather} \mathbb{L}_s^{\dagger} = \left[ \begin{array}{c c c} \mathbb{L}_{0s}^{\dagger} -C_s & f_{Ms} & f_{Ms} x_s^2 \\ \mathbb{L}_{1s}^{\dagger} & 0 & 0 \\ \mathbb{L}_{2s}^{\dagger} & 0 & 0 \end{array} \right], \label{eq:L_dagger} \end{gather} where $\mathbb{L}_{1s}^{\dagger} = (5/2)\mathbb{L}_{1s}-\mathbb{L}_{2s}$ and $\mathbb{L}_{2s}^{\dagger} = (3/2) \mathbb{L}_{1s} - \mathbb{L}_{2s}$. The same column structure is used as for the forward operator \eqref{eq:dke_species_array}, $\mathbb{L}^{\dagger} = \{ \mathbb{L}_s^{\dagger} \}_{s=1}^{N_{\text{species}}}$. The quantity $\mathbb{L}_{0s}^{\dagger}$ satisfies $\langle \int d^3 v \, g_{1s} \mathbb{L}_{0s} f_{1s}/f_{Ms} \rangle_{\psi} = \langle \int d^3 v \, f_{1s} \mathbb{L}_{0s}^{\dagger} g_{1s}/f_{Ms} \rangle_{\psi}$ and depends on which trajectory model is applied. The expression \eqref{eq:L_dagger} can be verified by noting that \begin{align} \langle \mathbb{L} F, G \rangle &= \sum_s \left \langle \int d^3 v \, \frac{f_{1s}\left((\mathbb{L}^{\dagger}_{0s} - C_s )g_{1s} + f_{Ms} \left( S_{1s}^g + S_{2s}^g x_s^2\right)\right)}{f_{Ms}}\right \rangle_{\psi} + S_{1s}^f\mathbb{L}_{1s}^{\dagger} g_{1s} + S_{2s}^f \mathbb{L}_{2s}^{\dagger} g_{1s} \nonumber \\ &= \langle F,\mathbb{L}^{\dagger} G \rangle. \end{align} For the DKES trajectories the adjoint operator is \begin{gather} \mathbb{L}_{0s}^{\dagger} = - \mathbb{L}_{0s}. \label{eq:dkes_adjoint} \end{gather} This anti-self-adjoint property is used in obtaining the variational principle which provides bounds on neoclassical transport coefficients in the DKES code \citep{Rij1989}. For full trajectories it is \begin{gather} \mathbb{L}_{0s}^{\dagger} = -\mathbb{L}_{0s} + \frac{q_s}{T_s} \der{\Phi}{\psi} \bm{v}_{\text{m}s} \cdot \nabla \psi. \label{eq:full_adjoint} \end{gather} The anti-self-adjoint property does not hold for this trajectory model, as the $\bm{E} \times \bm{B}$ drift \eqref{eq:full_ve} is no longer divergenceless. See \cref{ap:adjoint_operators} for details on obtaining these adjoint operators. \subsection{Discrete approach} \label{sec:discrete} Next we consider the discrete adjoint approach. Let $\overrightarrow{\bm{F}}$ be the set of unknowns computed with SFINCS after discretization of $F$. The linear DKE \eqref{eq:linear} upon discretization can then be written schematically as \begin{gather} \overleftrightarrow{\bm{L}} \overrightarrow{\bm{F}} = \overrightarrow{\bm{S}}. \label{eq:forward_discrete} \end{gather} In this case, we can define an inner product as the vector dot product, \begin{gather} \langle \overrightarrow{\bm{F}}, \overrightarrow{\bm{G}} \rangle = \overrightarrow{\bm{F}} \cdot \overrightarrow{\bm{G}}. \end{gather} In real Euclidean space, the adjoint operator, $\left(\overleftrightarrow{\bm{L}}\right)^{\dagger}$, which satisfies \begin{gather} \left \langle \overleftrightarrow{\bm{L}} \overrightarrow{\bm{F}},\overrightarrow{\bm{G}} \right \rangle = \left \langle \overrightarrow{\bm{F}},\left(\overleftrightarrow{\bm{L}}\right)^{\dagger} \overrightarrow{\bm{G}} \right \rangle \end{gather} is simply the transpose of the matrix, $\left(\overleftrightarrow{\bm{L}}\right)^T$.
Again, a moment of the distribution function, $\mathcal{R}$, can be expressed as an inner product with a vector $\overrightarrow{\bm{R}}$, \begin{gather} \mathcal{R} = \langle \overrightarrow{\bm{F}}, \overrightarrow{\bm{R}} \rangle. \end{gather} Using the discrete approach, the following adjoint equation must be solved, \begin{gather} \left(\overleftrightarrow{\bm{L}}\right)^T \overrightarrow{\bm{q}}^{\mathcal{R}} = \overrightarrow{\bm{R}}. \label{eq:adjoint_discrete} \end{gather} The adjoint variable, $\overrightarrow{\bm{q}}^{\mathcal{R}}$, can again be used to compute $\left(\partial \mathcal{R}/\partial \Omega_i\right)_{\overleftrightarrow{\bm{L}}\overrightarrow{\bm{F}} = \overrightarrow{\bm{S}}}$, \begin{gather} \left(\partder{\mathcal{R}}{\Omega_i} \right)_{\overleftrightarrow{\bm{L}}\overrightarrow{\bm{F}}=\overrightarrow{\bm{S}}} = \left(\partder{\mathcal{R}}{\Omega_i} \right)_{\overrightarrow{\bm{F}}} + \left \langle \overrightarrow{\bm{q}}^{\mathcal{R}}, \left( \partder{\overrightarrow{\bm{S}}}{\Omega_i} - \partder{\overleftrightarrow{\bm{L}}}{\Omega_i} \overrightarrow{\bm{F}} \right) \right \rangle. \label{eq:adjoint_diagnostic_discrete} \end{gather} As with the continuous approach, the partial derivatives on the right hand side can be computed analytically. In this way, the derivative of $\mathcal{R}$ with respect to $\Omega$ can be computed with only two linear solves, \eqref{eq:forward_discrete} and \eqref{eq:adjoint_discrete}. In the SFINCS implementation, the DKE is typically solved with the preconditioned GMRES algorithm. In the continuous approach, a preconditioner matrix for both the forward and adjoint operator must be $LU$-factorized. Here the preconditioner matrix is the same as the full matrix but without cross-species or speed coupling. As the adjoint matrix is sufficiently different from the forward matrix, we do not obtain convergence when the same preconditioner is used for both problems. However, in the discrete approach, the $LU$-factorization of the preconditioner for the forward matrix can be reused for the preconditioner of the adjoint matrix (if a matrix $A$ has been factorized as $A = LU$, then $A^{T} = U^T L^T$, where $U^T$ is lower triangular and $L^T$ is upper triangular). This provides a significant reduction in memory and computational cost for the discrete approach. Furthermore, the discrete adjoint approach provides the exact derivatives for the discretized problem: the adjoint equation is obtained using the vector dot product and matrix transpose, which can be computed without any numerical approximation. The error in the derivatives obtained by the adjoint method is therefore limited only by the tolerance to which the linear solve is performed with GMRES. On the other hand, the continuous adjoint approach relies on a continuous inner product which must ultimately be approximated numerically. Thus the continuous approach provides the exact derivatives only in the limit that the discrete approximation of the inner product exactly reproduces the continuous inner product. We therefore expect the results of the discrete and continuous approaches to agree within discretization error, as will be demonstrated in section \ref{sec:implementation}. The continuous approach can be advantageous in that an adjoint equation may be prescribed independently of the discretization scheme.
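As a toy illustration of the discrete approach (a self-contained sketch, not the SFINCS implementation), consider a small dense system whose matrix and right-hand side depend linearly on a single parameter $w$; the adjoint-based derivative of a moment $\mathcal{R} = \overrightarrow{\bm{F}} \cdot \overrightarrow{\bm{R}}$ then agrees with a finite difference:

\begin{verbatim}
# Toy check of the discrete adjoint identity: with L(w) F = S(w) and
# R = F . r, we have dR/dw = q . (dS/dw - dL/dw F), where L(w)^T q = r.
import numpy as np

rng = np.random.default_rng(0)
n = 6
L0, L1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
S0, S1 = rng.standard_normal(n), rng.standard_normal(n)
r = rng.standard_normal(n)

def R_of(w):
    return np.linalg.solve(L0 + w * L1, S0 + w * S1) @ r

w = 0.3
F = np.linalg.solve(L0 + w * L1, S0 + w * S1)   # one forward solve
q = np.linalg.solve((L0 + w * L1).T, r)         # one adjoint solve
dR_adjoint = q @ (S1 - L1 @ F)                  # dS/dw = S1, dL/dw = L1
dR_fd = (R_of(w + 1e-6) - R_of(w - 1e-6)) / 2e-6
assert np.isclose(dR_adjoint, dR_fd, rtol=1e-4)
\end{verbatim}

For many parameters the two solves are reused; only the inexpensive inner products on the final lines are repeated per parameter.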
Note that in the discrete approach, the adjoint operator is obtained from the matrix transpose of the discretized forward operator, which implies that the same spatial and velocity resolution parameters must be used for both the forward and adjoint solutions. In this work we employ the same discretization parameters for both the adjoint and forward problems, but this restriction is not required for the continuous approach. \section{Implementation and benchmarks} \label{sec:implementation} The adjoint method has been implemented in the SFINCS code using both the discrete and continuous approaches. The magnetic geometry is specified in Boozer coordinates \citep{Helander2014} such that the covariant form of the magnetic field is \begin{gather} \bm{B} = I(\psi) \nabla \theta + G(\psi) \nabla \zeta + K(\psi,\theta,\zeta) \nabla \psi, \label{eq:boozer_covariant} \end{gather} where $I(\psi) = \mu_0 I_T(\psi)/2\pi$ and $G(\psi) = \mu_0 I_P(\psi)/2\pi$, $I_T(\psi)$ is the toroidal current enclosed by $\psi$, and $I_P(\psi)$ is the poloidal current outside of $\psi$. The contravariant form is \begin{gather} \bm{B} = \nabla \psi \times \nabla \theta - \iota(\psi) \nabla \psi \times \nabla \zeta, \label{eq:boozer_contravariant} \end{gather} where $\iota(\psi)$ is the rotational transform. The Jacobian is obtained by dotting \eqref{eq:boozer_covariant} with \eqref{eq:boozer_contravariant}, \begin{gather} \sqrt{g} = \frac{G(\psi) + \iota(\psi) I(\psi)}{B^2}. \label{eq:jacobian} \end{gather} As $K(\psi,\theta,\zeta)$ does not appear in any of the trajectory coefficients (\eqref{eq:full_trajectories} and \eqref{eq:dkes_trajectories}), in the drive term in \eqref{eq:dke_model}, or in the geometric factors used to define the moments of the distribution function (\cref{eq:parallel_flow,eq:particle_flux,eq:heat_flux}), all the geometric dependence enters through $B(\psi,\theta,\zeta)$, $G(\psi)$, $I(\psi)$, and $\iota(\psi)$. We choose to use Boozer coordinates for these computations as this reduces the number of geometric parameters that must be considered, but the neoclassical adjoint method is not limited to this choice of coordinate system. We approximate $B$ by a truncated Fourier series, \begin{gather} B = \sum_{j} B_{m_jn_j}^c \cos(m_j\theta-n_j \zeta), \label{eq:B_Fourier} \end{gather} where $j$ sums over Fourier modes $m_j \leq m_{\max}$ and $|n_j| \leq n_{\max}$ such that $n_j$ is an integer multiple of $N_P$, the number of field periods. In \eqref{eq:B_Fourier}, we have assumed stellarator symmetry, such that $B(-\theta,-\zeta) = B(\theta,\zeta)$, and $N_P$ symmetry, such that $B(\theta,\zeta+2\pi/N_P) = B(\theta,\zeta)$. Thus we compute derivatives with respect to the parameters $\Omega = \{B_{mn}^c, I(\psi), G(\psi), \iota(\psi)\}$. Additionally, derivatives with respect to $E_r$ are computed, which are used for efficient ambipolar solutions and for computing derivatives of geometric quantities at ambipolarity (see section \ref{sec:ambipolarity}) rather than at fixed $E_r$.
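A direct evaluation of the truncated series \eqref{eq:B_Fourier} is straightforward; the sketch below (with placeholder mode amplitudes, not a real configuration) also checks the assumed stellarator symmetry:

\begin{verbatim}
# Sketch: evaluate B(theta, zeta) from cosine modes as in the truncated
# Fourier series; the amplitudes below are placeholders.
import numpy as np

def eval_B(theta, zeta, modes):
    # modes: dict mapping (m, n) -> B_mn^c, with n a multiple of N_P.
    return sum(Bmn * np.cos(m * theta - n * zeta)
               for (m, n), Bmn in modes.items())

N_P = 5
modes = {(0, 0): 1.0, (0, N_P): 0.05, (1, N_P): -0.04}
theta = np.linspace(0.0, 2.0 * np.pi, 41)
zeta = np.linspace(0.0, 2.0 * np.pi / N_P, 61)   # one field period
B = eval_B(*np.meshgrid(theta, zeta, indexing="ij"), modes)
# Stellarator symmetry: B(-theta, -zeta) == B(theta, zeta).
assert np.isclose(eval_B(0.3, 0.2, modes), eval_B(-0.3, -0.2, modes))
\end{verbatim}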
To demonstrate, we compute $\partial \mathcal{R}/\partial B_{00}^c$ for moments of the ion distribution function using the discrete and continuous adjoint methods. A 3-mode model of the W7-X standard configuration geometry at $\rho = \sqrt{\psi/\psi_0} = 0.5$ is used (table 1 in \cite{Beidler2011}), \begin{gather} B = B_{00}^c + B_{01}^c \cos(N_P\zeta) + B_{11}^c \cos(\theta - N_P \zeta) + B_{10}^c \cos(\theta), \end{gather} where $B_{01}^c = 0.04645 B_{00}^c$, $B_{11}^c = -0.04351 B_{00}^c$, and $B_{10}^c = -0.01902 B_{00}^c$. Electron and ion ($Z=1$) species are included, and the derivatives are computed at the ambipolar $E_r$ with the full trajectory model. The derivatives are also computed with a forward difference approach with varying step size $\Delta B_{00}^c$. In figure \ref{fig:benchmark_fixedEr} we show the fractional difference between $\partial \mathcal{R}/\partial B_{00}^c$ computed using the adjoint method and with forward difference derivatives. We see that at large values of $\Delta B_{00}^c$, the adjoint and numerical derivatives begin to differ significantly due to discretization error from the forward difference approximation. The fractional error decreases in proportion to $\Delta B_{00}^c$, as expected, until rounding error begins to dominate \citep{Sauer2012} when $\Delta B_{00}^c/B_{00}^c$ is approximately $10^{-4}$, where $B_{00}^c$ is the value of the unperturbed mode. The discrete and continuous approaches show qualitatively similar trends, though the minimum fractional difference is lower in the discrete approach due to the additional discretization error that arises with the continuous approach. With sufficient resolution parameters (41 $\theta$ grid points, 61 $\zeta$ grid points, 85 $\xi$ basis functions, and 7 $x$ basis functions), the fractional error of the continuous approach is $\leq 0.1 \%$ and should not be significant for most applications. We find similar agreement for other derivatives and with the DKES trajectory model. To demonstrate that the discrete and continuous methods indeed produce the same derivative information, we compute the fractional difference between the derivatives computed with the two methods as a function of the resolution parameters. As an example, in figure \ref{fig:continuous_discrete} we show the fractional difference in $\partial Q_i/\partial \iota$, where $Q_i$ is the radial ion heat flux, as a function of the number of Legendre polynomials used for the pitch angle discretization, $N_{\xi}$, keeping the other resolution parameters fixed. As $N_{\xi}$ is increased, the fractional difference converges to a finite value, approximately $10^{-4}$, due to the discretization error in the other resolution parameters. Similar resolution parameters are required for the convergence of the moment itself, $Q_i$, and of its derivative computed with the continuous method, $\partial Q_i/\partial \iota$. Convergence of $Q_i$ within 5\% is obtained with $N_{\xi} = 38$, similar to that required for the convergence of $\partial Q_i/\partial \iota$, as can be seen in figure \ref{fig:continuous_discrete}. In figure \ref{fig:computational_time} we compare the cost of calculating derivatives of one moment with respect to $N_{\Omega}$ parameters using the continuous and discrete adjoint methods and forward difference derivatives. All computations are performed on the Edison computer at NERSC using 48 processors, and the elapsed wall time is reported. Here we include the cost of solving the linear system and computing diagnostics $N_{\Omega} + 1$ times for the forward difference approach, and the cost of solving the forward and adjoint linear systems and computing diagnostics for the adjoint approaches.
The cost of the continuous approach is slightly more than that of the discrete approach due to the cost of factorizing the adjoint preconditioner. However, at large $N_{\Omega}$ the cost of computing diagnostics for the adjoint approach (e.g. computing $\partial \mathbb{S}/\partial \Omega$ and $\partial \mathbb{L}/\partial \Omega$ and performing the inner product in \eqref{eq:derivative_adjoint}) dominates that of solving the adjoint linear system; thus the discrete and continuous approaches become comparable in cost. In this regime, the adjoint approach provides speed-up by a factor of approximately $50$. \begin{figure} \begin{center} \begin{subfigure}[c]{0.422\textwidth}\includegraphics[trim=1cm 6cm 7.2cm 7cm,clip,width=1.0\textwidth]{B00_full_discrete.pdf} \caption{Discrete approach} \end{subfigure} \begin{subfigure}[c]{0.56\textwidth}\includegraphics[trim=2cm 6cm 2cm 7cm,clip,width=1.0\textwidth]{B00_full_continuous.pdf} \caption{Continuous approach} \end{subfigure} \caption{Fractional difference between derivatives with respect to $B_{00}^c$ computed with the adjoint method and with a forward difference derivative with step size $\Delta B_{00}^c$. The full trajectory model was used with (a) the discrete and (b) the continuous adjoint approaches.} \label{fig:benchmark_fixedEr} \end{center} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[trim=1cm 6.5cm 2cm 7cm,clip,width=1.0\textwidth]{continuous_discrete_xi_scan.pdf} \caption{} \label{fig:continuous_discrete} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[trim=1cm 6.5cm 2cm 7cm,clip,width=1.0\textwidth]{computational_cost.pdf} \caption{} \label{fig:computational_time} \end{subfigure} \caption{(a) The fractional difference between $\partial Q_i/\partial \iota$ computed with the continuous and discrete approaches converges with the number of pitch angle Legendre modes, $N_{\xi}$. (b) Comparison of the computational cost of computing $\partial \mathcal{R}/\partial \Omega$ with forward difference derivatives and the adjoint approach as a function of $N_{\Omega}$, the number of parameters in the gradient.} \label{fig:discrete_continuous_scan} \end{figure} \section{Applications of the adjoint method} \label{sec:applications} \subsection{Local magnetic sensitivity analysis} \label{sec:local_sensitivity} Using the adjoint method, it is possible to compute derivatives of a moment of the distribution function with respect to the Fourier amplitudes of the field strength, $\{ \partial \mathcal{R}/\partial B_{mn}^c\}$. Rather than consider sensitivity in Fourier space, we would like to compute the sensitivity to \textit{local} perturbations of the field strength. We now quantify the relationship between these two representations of sensitivity information. Consider the G\^{a}teaux functional derivative \citep{Delfour2011b} of $\mathcal{R}$ with respect to $B$, \begin{gather} \delta \mathcal{R}(\delta B;B(\bm{r})) = \lim_{\epsilon \rightarrow 0} \frac{\mathcal{R}(B(\bm{r}) + \epsilon \delta B(\bm{r}))-\mathcal{R}(B(\bm{r}))}{\epsilon}. \label{eq:functional_derivative} \end{gather} Here we consider a perturbation to the field strength at fixed $I(\psi)$, $G(\psi)$, and $\iota(\psi)$. As $\delta \mathcal{R}(\delta B;B(\bm{r}))$ is a linear functional of $\delta B$, by the Riesz representation theorem \citep{Rudin2006}, $\delta \mathcal{R}$ can be expressed as an inner product with $\delta B$ and some element of the appropriate space. 
The function $\delta B$ is defined on a flux surface, $\psi$; thus it is sensible to express $\delta \mathcal{R}$ in the following way, \begin{gather} \delta \mathcal{R}(\delta B; B(\bm{r})) = \left \langle S_{\mathcal{R}} \delta B(\bm{r}) \right \rangle_{\psi}. \label{eq:magnetic_sensitivity} \end{gather} Here $\delta B(\bm{r})$ describes the local perturbation to the field strength, and $\delta \mathcal{R}$ quantifies the corresponding change to the moment $\mathcal{R}$. The function $S_{\mathcal{R}}$ is analogous to the shape gradient, which quantifies the change in a figure of merit resulting from a differential perturbation to a shape \citep{Landreman2018}. The shape gradient will be discussed further in section \ref{sec:equilibria_opt}. Suppose that $B$ is stellarator symmetric and $N_P$ symmetric. If $E_r = 0$, then $S_{\mathcal{R}}$ must also possess stellarator and $N_P$ symmetry (see appendix \ref{app:symmetry}). However, when $E_r \neq 0$, $S_{\mathcal{R}}$ is no longer guaranteed to have stellarator symmetry. Nonetheless, it may be desirable to ignore the stellarator-asymmetric part of $S_{\mathcal{R}}$ if an optimized stellarator-symmetric configuration is desired. For the remainder of this work, we will make this assumption, though the analysis could be extended to consider the effect of breaking of stellarator symmetry. Under these assumptions, the quantity $S_{\mathcal{R}}$ can be approximated by a truncated Fourier series, \begin{gather} S_{\mathcal{R}} = \sum_{k} S_{m_kn_k} \cos(m_k \theta - n_k \zeta), \end{gather} where $k$ sums over $m_k \leq m_{\max}$ and $|n_k| \leq n_{\max}$ such that $n_k$ is an integer multiple of $N_P$. The quantity $\delta B(\bm{r})$ can be written in terms of perturbations to the Fourier coefficients, \begin{gather} \delta B(\bm{r}) = \sum_{j} \delta B_{m_jn_j}^c \cos(m_j \theta - n_j \zeta), \end{gather} where again the sum is only taken over $N_P$-symmetric modes. Now $\delta \mathcal{R}$ can be written in terms of perturbations to the Fourier coefficients, \begin{gather} \delta \mathcal{R} = \sum_{j} \partder{\mathcal{R}}{B_{m_jn_j}^c} \delta B_{m_jn_j}^c. \end{gather} In this way, \eqref{eq:magnetic_sensitivity} can be expressed as a linear system, \begin{gather} \partder{\mathcal{R}}{B_{m_jn_j}^c} = \sum_k D_{jk} S_{m_kn_k}, \end{gather} where \begin{gather} D_{jk} = V'(\psi)^{-1} \int_{0}^{2\pi} d \theta \int_0^{2\pi} d \zeta \, \sqrt{g} \cos(m_j \theta - n_j \zeta) \cos(m_k \theta - n_k \zeta). \end{gather} If the same number of modes is used to discretize $\delta B$ and $S_{\mathcal{R}}$, then the linear system is square. \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[trim=2cm 0cm 2cm 3cm,clip,width=1.0\textwidth]{S_Jb_fsa.pdf} \caption{} \label{fig:bootstrap_local_sensitivity} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[trim=2cm 0cm 2cm 3cm,clip,width=1.0\textwidth]{S_gamma_fsa.pdf} \caption{} \label{fig:particleFlux_sensitivity} \end{subfigure} \caption{(a) The local magnetic sensitivity function for the bootstrap current, $S_{J_b}$, is shown for the W7-X standard configuration. Positive values indicate that increasing the field strength at a given location will increase $J_b$ through \eqref{eq:magnetic_sensitivity}.
(b) The local sensitivity function for the ion particle flux, $S_{\Gamma_i}$.} \end{figure} In contrast to derivatives with respect to the Fourier modes of $B$, the sensitivity function, $S_{\mathcal{R}}$, is a spatially local quantity, quantifying the change in a figure of merit resulting from a local perturbation of the field strength. In this way, $S_{\mathcal{R}}$ can inform where perturbations to the magnetic field strength can be tolerated. The sensitivity function could be related directly to a local magnetic tolerance using the method described in section 9 of \cite{Landreman2018}. In contrast with that work, here we are considering perturbations to the field strength on any flux surface rather than at the plasma boundary. However, $S_{\mathcal{R}}$ still provides insight into where trim coils should be placed or coil displacements can be tolerated without sacrificing desired neoclassical properties. The sensitivity function can also be used for gradient-based optimization in the space of the field strength on a flux surface, as in section \ref{sec:vacuum_opt}. We compute $S_{J_b}$ for the W7-X standard configuration at $\rho = 0.70$, shown in figure \ref{fig:bootstrap_local_sensitivity}. We use a fixed-boundary equilibrium that preceded the coil design and does not include coil ripple, and the full equilibrium is used rather than the truncated Fourier series considered in section \ref{sec:implementation}. The same resolution parameters are used as in section \ref{sec:implementation}, and derivatives with respect to $B_{mn}^c$ are computed for $m_{\max} = n_{\max} = 20$. The largest modes for this configuration are the helical curvature $B_{11}^c$, the toroidal curvature $B_{10}^c$, and the toroidal mirror $B_{01}^c$. We find that $S_{J_b}$ is large and negative on the inboard side, indicating that increasing the magnitude of the toroidal curvature component of $B$ would lead to an increase in $J_b$. This result is in agreement with previous analysis of the dependence of the bootstrap current on these three modes in the W7-X magnetic configuration space \citep{Maassberg1993}, which found that at low collisionality the bootstrap current coefficients depend strongly on the toroidal curvature. Figure \ref{fig:particleFlux_sensitivity} shows the sensitivity function for the ion particle flux, $S_{\Gamma_i}$, computed for the same configuration using $m_{\max} = 20$ and $n_{\max} = 25$. We find that the particle flux is more sensitive to perturbations on the outboard side in localized regions, while on the inboard side the sensitivity is relatively small in magnitude. \subsection{Gradient-based optimization} \subsubsection{Optimization of field strength} \label{sec:vacuum_opt} As a second demonstration of the adjoint neoclassical method, we consider optimizing in the space of the field strength on a surface, taking $\Omega = \{B_{mn}^c\}$. As Boozer coordinates are used, the covariant form \eqref{eq:boozer_covariant} satisfies $(\nabla \times \bm{B}) \cdot \nabla \psi = 0$ and the contravariant form \eqref{eq:boozer_contravariant} satisfies $\nabla \cdot \bm{B} = 0$. As we will artificially modify the field strength while keeping the other geometric parameters fixed, the resulting field will not necessarily satisfy both of these conditions with both the covariant and contravariant forms.
While there is no guarantee that the resulting field strength will be consistent with a global equilibrium solution, this exercise provides insight into how local changes to the field strength can impact neoclassical properties. As a second step, the outer boundary could be optimized to match the desired field strength on a single surface. In section \ref{sec:equilibria_opt}, we discuss how the derivatives computed in this work could be coupled to optimization of an MHD equilibrium. We perform optimization with a BFGS quasi-Newton method \citep{Nocedal1999} using an objective function $\chi^2 = J_b^2$. A backtracking line search is used at each iteration to find a step size that satisfies a condition of sufficient decrease of $\chi^2$. We use the same equilibrium as in section \ref{sec:local_sensitivity}, retaining modes $m \leq 12$ and $|n| \leq 12$, and compute derivatives with respect to these modes. Convergence to $\chi^2 \leq 10^{-10}$ was obtained within 8 BFGS iterations (28 function evaluations), as shown in figure \ref{fig:bfgs_convergence}. The difference in field strength between the initial and optimized configurations, $B_{\text{opt}}-B_{\text{init}}$, is shown in figure \ref{fig:B_opt}. As expected from the analysis in section \ref{sec:local_sensitivity}, the field strength increased on the outboard side and decreased on the inboard side in comparison with $B_{\text{init}}$. \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[trim=1cm 6cm 2cm 6cm,clip,width=1.0\textwidth]{bfgs_convergence.pdf} \caption{} \label{fig:bfgs_convergence} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[trim=1cm 6cm 2cm 6cm,clip,width=1.0\textwidth]{B_opt.pdf} \caption{} \label{fig:B_opt} \end{subfigure} \caption{(a) Convergence of $\chi^2 = J_b^2$ for optimization over $\Omega = \{B_{mn}^c\}$ with an adjoint-based BFGS method. (b) The change in field strength from the initial to the optimized configuration.} \label{fig:opt} \end{figure} \subsubsection{Optimization of MHD equilibria} \label{sec:equilibria_opt} The local sensitivity function, $S_{\mathcal{R}}$, along with $\partial \mathcal{R}/\partial I$, $\partial \mathcal{R}/\partial G$, and $\partial \mathcal{R}/\partial \iota$, can be used to determine how perturbations to the outer boundary of the plasma, $\partial \Gamma$, result in perturbations to $\mathcal{R}$. This is quantified through the idea of the shape gradient, which is described below. The partial derivatives of $\mathcal{R}$ can be computed with the adjoint method outlined in section \ref{sec:adjoint_approach}, and the shape gradient can be obtained with only one additional MHD equilibrium solution through the application of another adjoint method. Consider a figure of merit which is integrated over a toroidal domain, $\Gamma$, \begin{gather} f_{\mathcal{R}}(\Gamma) = \int_{\Gamma} d^3 x \, w(\psi) \mathcal{R}(\psi), \end{gather} where $w(\psi)$ is a weighting function. That is, SFINCS is run on a set of $\psi$ surfaces within $\Gamma$ and the volume integral is computed numerically. Here we consider $\partial \Gamma$ to be the plasma boundary used for a fixed-boundary MHD equilibrium calculation. The perturbation to $f_{\mathcal{R}}$ resulting from a normal perturbation to $\partial \Gamma$ can be written in the following form, \begin{gather} \delta f_{\mathcal{R}}(\Gamma;\delta \bm{r}) = \int_{\partial \Gamma} d^2 x \, \left( \delta \bm{r} \cdot \bm{n} \right) \mathcal{G}, \end{gather} under certain assumptions of smoothness \citep{Delfour2011a}.
This can be thought of as another instance of the Riesz representation theorem, as $\delta f_{\mathcal{R}}$ is a linear functional of $\delta \bm{r}$. Here $\bm{n}$ is the outward unit normal on $\partial \Gamma$ and $\delta \bm{r}$ is a vector field describing the perturbation to the surface. Intuitively, only normal perturbations to $\partial \Gamma$ result in a change to $f_{\mathcal{R}}$. The shape gradient is $\mathcal{G}$, which quantifies the contribution of a local normal perturbation of the boundary to the change in $f_{\mathcal{R}}$. The shape gradient can be used for fixed-boundary optimization of equilibria or for analysis of sensitivity to perturbations of magnetic surfaces. It can be computed using a second adjoint method, where a perturbed MHD force balance equation is solved with the addition of a bulk force which depends on derivatives computed from the neoclassical adjoint method \citep{Antonsen2019}. While the continuous neoclassical adjoint method described in this work arises from the self-adjointness of the linearized Fokker-Planck operator, the adjoint method for MHD equilibria arises from the self-adjointness of the MHD force operator. In practice these two adjoint methods could be coupled by first computing an MHD equilibrium solution, computing neoclassical transport and its geometric derivatives from this equilibrium with the neoclassical adjoint method, and passing these derivatives back to the equilibrium code to compute the shape gradient with the perturbed MHD adjoint method. In this way derivatives of neoclassical quantities with respect to the shape of the outer boundary are computed with only two equilibrium solutions and two DKE solutions. This calculation will be reported in a future publication. Rather than solve an additional adjoint equation, the outer boundary could be optimized by numerically computing derivatives of $\{B_{mn}^c(\psi),G(\psi),I(\psi)\}$ with respect to the double Fourier series describing the outer boundary shape in cylindrical coordinates, $\{R_{mn}^c, Z_{mn}^s\}$, using a finite difference method. This could be done using the STELLOPT code \citep{Spong1998,Reiman1999} with BOOZ\_XFORM \citep{Sanchez2000} to perform the coordinate transformation. For example, if the rotational transform is held fixed in the VMEC equilibrium calculation \citep{Hirshman1983}, the derivative of a moment, $\mathcal{R}$, with respect to a boundary coefficient, $R_{mn}^c$, can be computed as, \begin{gather} \partder{\mathcal{R}(\psi)}{R_{mn}^c} = \sum_{m'n'}\partder{\mathcal{R}(\psi)}{B_{m'n'}^c(\psi)} \partder{B_{m'n'}^c(\psi)}{R_{mn}^c} + \partder{\mathcal{R}(\psi)}{G(\psi)}\partder{G(\psi)}{R_{mn}^c} + \partder{\mathcal{R}(\psi)}{I(\psi)}\partder{I(\psi)}{R_{mn}^c}, \end{gather} where $\partial \mathcal{R}(\psi)/\partial B_{mn}^c(\psi)$, $\partial \mathcal{R}(\psi)/\partial G(\psi)$, and $\partial \mathcal{R}(\psi)/\partial I(\psi)$ are computed with the neoclassical adjoint method and $\partial B_{mn}^c(\psi)/\partial R_{mn}^c$, $\partial G(\psi)/\partial R_{mn}^c$, and $\partial I(\psi)/\partial R_{mn}^c$ are computed with finite difference derivatives using STELLOPT. Similarly, derivatives of $\{B_{mn}^c(\psi),G(\psi),I(\psi)\}$ could be computed with respect to coil parameters using a free-boundary equilibrium solution, allowing for direct optimization of neoclassical quantities with respect to coil shapes. 
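The chain rule above is a simple composition once the two sets of derivatives are available; a schematic sketch (array shapes and values are placeholders, not real data) is:

\begin{verbatim}
# Schematic composition of adjoint-based derivatives (with respect to
# the Boozer spectrum on a surface) with finite-difference geometry
# derivatives (with respect to one boundary coefficient R_mn^c).
import numpy as np

n_modes = 25
dR_dB = np.ones(n_modes)            # adjoint: dR/dB_{m'n'}^c
dR_dG, dR_dI = 0.1, -0.2            # adjoint: dR/dG, dR/dI
dB_dRmn = np.ones(n_modes) * 1e-3   # finite difference (e.g. STELLOPT)
dG_dRmn, dI_dRmn = 1e-2, 0.0        # finite difference (e.g. STELLOPT)

dR_dRmn = dR_dB @ dB_dRmn + dR_dG * dG_dRmn + dR_dI * dI_dRmn
\end{verbatim}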
The neoclassical calculation with SFINCS is typically significantly more expensive than the equilibrium calculation (for the geometry discussed in section \ref{sec:local_sensitivity}, fixed-boundary VMEC took 54 seconds while SFINCS took 157 seconds on 4 processors of the NERSC Edison computer). As such, combining adjoint-based with finite difference derivatives can still result in a significant computational savings. \subsection{Ambipolarity} \label{sec:ambipolarity} As stellarators are not intrinsically ambipolar, the radial electric field is not truly an independent parameter: the ambipolar $E_r$, which satisfies the condition $J_r(E_r) = 0$, must be obtained. The application of adjoint-based derivatives for computing the ambipolar solution is discussed in section \ref{sec:ambipolar_sol}. An adjoint method to compute derivatives with respect to geometric parameters at fixed ambipolarity is discussed in section \ref{sec:deriv_ambipolarity}. \subsubsection{Accelerating the ambipolar solve} \label{sec:ambipolar_sol} A non-linear root-finding algorithm must be used to compute the ambipolar $E_r$. This root-finding can be accelerated with derivative information, such as with a Newton-Raphson method \citep{Press2007}. The required derivative, $\partial J_r/\partial E_r$, can be computed with the discrete or continuous adjoint method as described in section \ref{sec:adjoint_approach} with the replacement $\Omega_i \rightarrow E_r$, considering $\mathcal{R} = J_r$. We implement three non-linear root-finding methods: Brent's method \citep{Brent2013}, the Newton-Raphson method, and a hybrid between the bisection and Newton-Raphson methods \citep{Press2007}. Brent's method guarantees at least linear convergence by combining inverse quadratic interpolation with bisection, and does not require derivatives. The Newton-Raphson method can provide quadratic convergence under certain assumptions but in general is not guaranteed to converge. If an iterate lies near a stationary point or a poor initial guess is given, the method can fail. For this reason we implement the hybrid method, which combines the possible quadratic convergence of Newton-Raphson with the guaranteed linear convergence of the bisection method; a toy sketch appears below. Both Brent's method and the hybrid method require the root to be bracketed, and therefore may require additional function evaluations in order to obtain the bracket. We compare these methods in figure \ref{fig:root_finding}, using the W7-X standard configuration considered in section \ref{sec:local_sensitivity} with the full trajectory model and the discrete adjoint approach, beginning with an initial guess of $E_r = -10$ kV/m with bounds at $E_r^{\min} = -100$ kV/m and $E_r^{\max} = 100$ kV/m. The root is located at $E_r =-3.56$ kV/m. For this example, the hybrid and Newton methods had nearly identical convergence properties, though the Newton method is less expensive as it does not require $J_r$ to be evaluated at the bounds of the interval. To obtain the same tolerance, the Newton method provided a 14\% savings in wall-clock time over Brent's method. In the above discussion we have made the assumption that there is only one stable root of interest. Of course, a given configuration may possess several roots, especially if the ions and electrons are in different collisionality regimes \citep{Hastings1985}. Multiple roots can be obtained by performing several root solves with different initial values and brackets, which could be trivially parallelized. Thus the adjoint method could still provide an acceleration in this more general case.
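A minimal sketch of such a hybrid solver is given below (toy code, not the SFINCS implementation; in practice each evaluation of $J_r$ is a forward solve and each derivative an adjoint solve):

\begin{verbatim}
# Toy Newton/bisection hybrid for J_r(E_r) = 0: take a Newton step when
# it stays inside the current bracket, otherwise bisect.
def solve_ambipolar(J_r, dJr_dEr, lo, hi, tol=1e-10, max_iter=50):
    Er, f_lo = 0.5 * (lo + hi), J_r(lo)
    for _ in range(max_iter):
        f = J_r(Er)
        if abs(f) < tol:
            return Er
        if f * f_lo < 0:          # maintain the bracket
            hi = Er
        else:
            lo, f_lo = Er, f
        Er_new = Er - f / dJr_dEr(Er)       # Newton step
        if not (lo < Er_new < hi):          # fall back to bisection
            Er_new = 0.5 * (lo + hi)
        Er = Er_new
    return Er

# Artificial linear J_r with root at E_r = -3.56 (units arbitrary):
root = solve_ambipolar(lambda x: x + 3.56, lambda x: 1.0, -100.0, 100.0)
assert abs(root + 3.56) < 1e-8
\end{verbatim}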
\begin{figure} \centering \includegraphics[trim=1cm 6cm 2cm 7.5cm,clip,width=0.49\textwidth]{root_finding_Er-10.pdf} \caption{The ambipolar root is obtained with the Brent, Newton-Raphson, and Newton hybrid nonlinear root solvers. The derivatives obtained with the adjoint method provide better convergence properties for the Newton methods.} \label{fig:root_finding} \end{figure} \subsubsection{Derivatives at ambipolarity} \label{sec:deriv_ambipolarity} The adjoint method described in section \ref{sec:adjoint_approach} assumes that $E_r$ is held constant when computing derivatives with respect to $\Omega$. However, $E_r$ cannot truly be determined independently from geometric quantities, as the ambipolar solution should be recomputed as the geometry is altered. It is therefore desirable to compute derivatives at fixed ambipolarity (fixed $J_r = 0$) rather than at fixed $E_r$. This is performed by solving an additional adjoint equation, \begin{gather} \mathbb{L}^{\dagger} q^{J_r} = \widetilde{J_r}, \label{eq:J_r_adjoint} \end{gather} in the continuous approach or \begin{gather} \left( \overleftrightarrow{\bm{L}} \right)^T \overrightarrow{\bm{q}}^{J_r} = \overrightarrow{\bm{J}_r}, \label{eq:J_r_adjoint_discrete} \end{gather} in the discrete approach. Details are described in appendix \ref{app:ambipolar}. It should be noted that by computing derivatives at ambipolarity we assume that a given moment $\mathcal{R}$ is a differentiable function of the geometry at fixed $J_r = 0$. That is, this method cannot be applied to cases in which a stable root disappears as the geometry varies. As this will occur at a stationary point of $J_r(E_r)$, this situation could be avoided within an optimization loop by computing derivatives at constant $E_r$ rather than constant $J_r$ if $|\partial J_r/\partial E_r|$ falls below a given threshold at ambipolarity. Although an additional adjoint solve is required, this method of computing derivatives at ambipolarity is advantageous, as several linear solves are typically required to obtain the ambipolar root. A comparison of the computational cost between the adjoint method and the forward difference method for derivatives at ambipolarity is shown in figure \ref{fig:cost_adjoint}. Here the full trajectory model is used, and the results for both the discrete and continuous adjoint methods are shown. For the finite difference derivative, the ambipolar solve is performed with Brent's method at each step in $\Omega$. As in figure \ref{fig:computational_time}, we find that for large $N_{\Omega}$ the costs of the continuous and discrete approaches are essentially the same, as the cost is no longer dominated by the linear solve. When computing the derivatives at ambipolarity, both adjoint methods decrease the cost by a factor of approximately $200$ for large $N_{\Omega}$. In figure \ref{fig:ambipolar_benchmark} we show a benchmark between derivatives at ambipolarity, $(\partial \mathcal{R}/\partial B_{00}^c)_{J_r}$, computed with the discrete adjoint method and with forward difference derivatives. For the forward difference method, the Newton solver is used to obtain the ambipolar $E_r$ as $B_{00}^c$ is varied. As the forward difference step size $\Delta B_{00}^c$ decreases, the fractional difference again decreases in proportion to $\Delta B_{00}^c$ until it reaches a minimum when $\Delta B_{00}^c/B_{00}^c$ is approximately $10^{-4}$.
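For orientation, the relation underlying these derivatives is obtained by implicit differentiation of the ambipolarity constraint $J_r\left(\Omega, E_r(\Omega)\right) = 0$: assuming $\partial J_r/\partial E_r \neq 0$ at the root,
\begin{gather}
\left(\partder{\mathcal{R}}{\Omega_i}\right)_{J_r} = \left(\partder{\mathcal{R}}{\Omega_i}\right)_{E_r} - \left(\partder{\mathcal{R}}{E_r}\right)_{\Omega} \left(\partder{J_r}{E_r}\right)_{\Omega}^{-1} \left(\partder{J_r}{\Omega_i}\right)_{E_r},
\end{gather}
where each partial derivative is taken at fixed $E_r$ or fixed $\Omega$ as indicated, and the adjoint solution $q^{J_r}$ provides $\partial J_r/\partial \Omega_i$ for all $i$ at once.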
In comparison with figure \ref{fig:benchmark_fixedEr}, we see that the minimum fractional difference is slightly larger at fixed ambipolarity than at fixed $E_r$, as the tolerance parameters associated with the Newton solver introduce an additional source of error to the forward difference approach. \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[trim=0.7cm 5.8cm 2cm 5.0cm,clip,width=1.0\textwidth]{computational_cost_ambipolar.pdf} \caption{} \label{fig:cost_adjoint} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[trim=1cm 6cm 2cm 7cm,clip,width=1.0\textwidth]{B00_full_discrete_ambipolar.pdf} \caption{} \label{fig:ambipolar_benchmark} \end{subfigure} \caption{(a) The cost of computing the gradient $\partial \mathcal{R}/\partial \Omega$ at ambipolarity scales with $N_{\Omega}$, the number of parameters in $\Omega$. (b) The fractional difference between $\partial \mathcal{R}/\partial B_{00}^c$ at constant ambipolarity obtained with the adjoint method and with finite difference derivatives.} \end{figure} In figures \ref{fig:S_const_Er_particle} and \ref{fig:S_const_Jr_particle} we compare the sensitivity function for the particle flux, $S_{\Gamma_i}$, computed using derivatives at constant $E_r$ with that computed at constant $J_r$. Here derivatives are computed using the discrete adjoint method with full trajectories, and the sensitivity function is constructed as described in section \ref{sec:local_sensitivity}. The configuration and numerical parameters are the same as described in section \ref{sec:local_sensitivity}. At constant $J_r$ the large region of increased sensitivity on the outboard side that appears at constant $E_r$ remains, though the overall magnitude of the sensitivity decreases. Thus it may be important to account for the effect of the ambipolar $E_r$ when optimizing for radial transport. In figures \ref{fig:S_const_Er_bootstrap} and \ref{fig:S_const_Jr_bootstrap} we perform the same comparison for $S_{J_b}$, finding the derivatives at fixed $E_r$ and at fixed $J_r$ to be virtually identical. This is to be expected, as numerical calculations of neoclassical transport coefficients for W7-X have found that the bootstrap coefficients are much less sensitive to $E_r$ than those for the radial transport (figures 18 and 26 in \cite{Beidler2011}). Furthermore, the bootstrap current in the $1/\nu$ regime is independent of $E_r$, and the finite-collisionality correction is small for optimized stellarators, such as W7-X \citep{Helander2017}. Therefore, the ambipolarity corrections to the derivatives are less important for $J_b$ than for the radial transport. 
\begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[trim=1cm 6cm 1cm 6cm,clip,width=1.0\textwidth]{S_const_Er_particle.pdf} \caption{} \label{fig:S_const_Er_particle} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[trim=1cm 6cm 1cm 6cm,clip,width=1.0\textwidth]{S_const_Jr_particle.pdf} \caption{} \label{fig:S_const_Jr_particle} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[trim=1cm 6cm 1cm 6cm,clip,width=1.0\textwidth]{S_const_Er_bootstrap.pdf} \caption{} \label{fig:S_const_Er_bootstrap} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[trim=1cm 6cm 1cm 6cm,clip,width=1.0\textwidth]{S_const_Jr_bootstrap.pdf} \caption{} \label{fig:S_const_Jr_bootstrap} \end{subfigure} \caption{The sensitivity function for the ion particle flux, $S_{\Gamma_i}$, is computed at (a) constant $E_r$ and (b) constant $J_r$. Similarly, $S_{J_b}$ is computed at (c) constant $E_r$ and (d) constant $J_r$.} \end{figure} \section{Conclusions} We have described a method by which moments $\mathcal{R}$ of the neoclassical distribution function can be differentiated efficiently with respect to many parameters. The adjoint approach requires defining an inner product from which the adjoint operator is obtained. We consider two choices for this inner product. One choice corresponds to computing the adjoint of the linear operator after discretization, and the other corresponds to computing it before discretization. In the case of the former, the Euclidean dot product can be used, and in the case of the latter, an inner product whose corresponding norm is similar to the free energy norm \eqref{eq:inner_product} is defined. In section \ref{sec:implementation}, we show that these approaches provide the same derivative information within discretization error, as expected. Both methods provide a reduction in computational cost by a factor of approximately $50$ in comparison with forward difference derivatives when differentiating with respect to many ($\mathcal{O}(10^2)$) parameters. In section \ref{sec:deriv_ambipolarity} the adjoint method is extended to compute derivatives at ambipolarity. This method provides a reduction in cost by a factor of approximately $200$ over a forward difference approach. We have implemented this method in the SFINCS code, and similar methods could be applied to other drift kinetic solvers. In this work we consider derivatives with respect to geometric quantities that enter the DKE through Boozer coordinates. However, the adjoint neoclassical method we have described is much more general, allowing for many possible applications. For example, derivatives of the radial fluxes with respect to the temperature and density profiles could be used to accelerate the solution of the transport equations using a Newton method \citep{Barnes2010}. The transport solution could furthermore be incorporated into the optimization loop to self-consistently evolve the macroscopic profiles in the presence of neoclassical fluxes. Rather than simply optimizing for minimal fluxes, an objective function such as the total fusion power could be considered \citep{Highcock2018}, with optimization accelerated by adjoint-based derivatives. Another application of the continuous adjoint formulation is the correction of discretization error.
The same adjoint solution obtained in section \ref{sec:continuous} can be used to quantify and correct for the error in a moment, $\mathcal{R}$, providing similar accuracy to that computed with a higher-order stencil or a finer mesh without the associated cost. This method has been applied in the field of computational fluid dynamics by solving adjoint Euler equations \citep{Venditti1999,Pierce2004} and could prove useful for efficiently obtaining solutions of the DKE in low collisionality regimes. In section \ref{sec:vacuum_opt} we have shown an example of adjoint-based neoclassical optimization, where the optimization space is taken to be the Fourier modes of the field strength on a surface, $\{B_{mn}^c\}$. While optimization within this space is not necessarily consistent with a global equilibrium solution, it demonstrates the adjoint neoclassical method for efficient optimization. In section \ref{sec:equilibria_opt}, two approaches to self-consistently optimize MHD equilibria are discussed. Further discussion and demonstration of these approaches will be provided in a future publication. In appendix \ref{app:symmetry} we show that when $E_r = 0$ and the unperturbed geometry is stellarator symmetric, the sensitivity functions for moments of the distribution function are also stellarator symmetric. However, when $E_r \neq 0$ this is no longer true. This implies that obtaining minimal neoclassical transport in the $\sqrt{\nu}$ regime may require breaking of stellarator symmetry. In this work we have ignored the effects of stellarator symmetry-breaking, though we hope to extend this work to study these effects in the future. \section*{Acknowledgements} The authors acknowledge helpful discussions with L.-M. Imbert-G\'{e}rard and assistance with the STELLOPT code from S. Lazerson. This work was supported by the US Department of Energy through grants DE-FG02-93ER-54197 and DE-FC02-08ER-54964. The computations presented in this paper have used resources at the National Energy Research Scientific Computing Center (NERSC). Support for IGA for the initial work was provided by the Chalmers University of Technology, under the auspices of the Framework grant for Strategic Energy Research (Dnr. 2014-5392) from Vetenskapsr{\aa}det.
{ "timestamp": "2019-06-04T02:14:03", "yymm": "1904", "arxiv_id": "1904.06430", "language": "en", "url": "https://arxiv.org/abs/1904.06430" }
\section{The intracluster medium as a magnetized plasma} The space between cluster member galaxies is filled with hot and diffuse ionized plasma that emits X-rays through thermal bremsstrahlung and atomic lines. We often treat it as a fluid, with effective transport coefficients determined by the particle--magnetic field interactions and plasma instabilities. It is critical to understand the transport processes in the ICM on a variety of physical scales: microscopic plasma physics impacts many macroscopic processes, e.g., the dissipation and redistribution of the kinetic energy released by mergers or AGN outbursts (Fabian et al.\ 2005), the cooling and heating in cluster cores, and gas stripping from galaxies (Nulsen 1982). Furthermore, large cosmological simulations are our primary way to understand the baryonic processes of the Universe. However, all the current major cosmological simulations assume the interstellar medium, intracluster medium, and large-scale gas to be inviscid (Figure~\ref{fig:tng}), e.g., the Illustris project (Vogelsberger et al.\ 2014) and the EAGLE project (Schaye et al.\ 2014) (although the numerical viscosity is non-zero and depends on the resolution of the simulation). To fully understand feedback, galaxy evolution, and structure formation, it is crucial to constrain the microphysical properties of the astrophysical plasma, including viscosity, thermal conductivity, the strength and topology of the magnetic field, and small-scale turbulence. X-ray observations have been used to probe the physical conditions in the ICM. For example, the turbulent velocity has been determined indirectly via resonance scattering and surface brightness fluctuations (Ogorzalek et al.\ 2017; Gu et al.\ 2018; Zhuravleva et al.\ 2015). The microcalorimeter on board Hitomi has measured the level of turbulence directly using line broadening (Hitomi collaboration 2016). The {\sl Chandra} X-ray Observatory (half-arcsec spatial resolution) has revealed the ubiquity of ``cold fronts'' in the ICM (see Markevitch \& Vikhlinin 2007 for a review), which can be used to constrain the viscosity and magnetic field. Cold fronts are sharp interfaces between cooler, denser, hence brighter, gas and hotter, lower-density, hence fainter, medium, where the pressure is continuous. They result from merger activity, either created directly by the infall of a low-entropy subcluster (merging cold fronts), or from gas motions induced by the gravity of an infalling subcluster (sloshing cold fronts). For purely hydrodynamic interactions, the Kelvin-Helmholtz instability (KHI) is expected to develop at shearing interfaces. However, magnetic fields or viscosity can suppress the KHI (Chandrasekhar 1961; Lamb 1932). State-of-the-art simulations have demonstrated that cold fronts appear smoother in the presence of either magnetic fields or viscosity at the levels expected in the ICM (see ZuHone \& Roediger 2016 for a review). Recent deep {\sl Chandra} observations have been dedicated to revisiting cold fronts identified by previous observations, leading to a deeper understanding of the microphysics of the ICM. Multiple-edge structures in X-ray surface brightness have been identified at the cold fronts in a growing number of systems (Werner et al.\ 2016; Su et al.\ 2017a; Ichinohe et al.\ 2017; Ichinohe et al.\ 2018). These features are consistent with the presence of KHI eddies and an inviscid ICM (Figure~\ref{fig:test}-top).
Walker et al.\ (2017) studied the sloshing cold fronts in a number of clusters (Perseus, Centaurus and Abell 1795) and identified concave `bay' substructures in X-ray and radio imaging. These features resemble the giant KHI rolls expected when the ratio of thermal to magnetic pressure is $\beta=200$ (Figure~\ref{fig:test}-middle). \begin{figure} \floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top},capbesidewidth=6.3cm}}]{figure}[\FBwidth] {\caption{Large-scale gas distribution of IllustrisTNG: the Illustris follow-up simulation (\url{www.tng-project.org}). All the current major cosmological simulations assume the viscosity to be zero. It is critical to measure the viscosity of the gas in galactic systems and large-scale structures to truly understand the physical processes that drive the cosmic evolution.}\label{fig:tng}} { \includegraphics[width=0.5\textwidth]{Tnggas.png}} \vspace{-0.2cm} \end{figure} In addition to cold fronts, stripped tails are another product of ongoing infall in galaxy clusters and can be used to constrain the microphysics of the ICM (Roediger et al.\ 2015a,b). For example, a temperature gradient was detected in the stripped tail of NGC~1404, indicating a well-mixed plasma of low viscosity (Su et al.\ 2017a; Sheardown et al.\ 2018). The temperature map derived with the deep {\sl Chandra} observations is remarkably similar to that derived from an inviscid simulation specifically tailored to the infall of NGC~1404 (Figure~\ref{fig:test}-bottom). Results from deep X-ray observations of the nearby early-type galaxy NGC~4552 were compared with viscous and inviscid hydrodynamic simulations specifically tailored to the stripping of this galaxy (Roediger et al.\ 2015a,b; Kraft et al.\ 2017). Inviscid stripping was favored by the study. Based on the survival of the stripped low-entropy infalling group in the hot cluster Abell~2142, Eckert et al.\ (2014, 2017) find thermal conduction to be strongly suppressed. Note that the properties of cold fronts and stripped tails also depend on the merger history (Su et al.\ 2017d; Kraft et al.\ 2017). It is important to understand the entire dynamic state of the system before any conclusions can be drawn from the immediate observables. \section{Measuring gas motions in galaxy clusters} The ICM is always in a dynamically active state. Forming at the knots of the cosmic filaments, galaxy clusters grow continually via mergers of subclusters and accretion of galaxies. In addition, AGN periodically release mechanical energy at cluster centers. Both effects are expected to cause shocks, bulk motions, and turbulence in the ICM over a large span of physical scales. To date, the most practical way of probing the ICM velocity is based on the Rankine-Hugoniot jump conditions: the gas properties on both sides of a shock wave can be used to infer the infalling speed of a substructure (e.g., Su et al.\ 2016; Vikhlinin et al.\ 2001; Markevitch et al.\ 2002; Zhang et al.\ 2019). For infalling substructures, Su et al.\ (2017b) developed an analytical method combining the jump conditions and the Bernoulli equation, which allows us to derive a general speed, the complete velocity field, and the distance of the substructure from the pressure distribution. The motion of sloshing gas can also be determined via the pressure distribution. Ueda et al.\ (2018) analyzed the cool-core cluster RX J1347.5-1145.
They found that while the residual X-ray image derived from {\sl Chandra} shows a clear spiral pattern characteristic of gas sloshing, no significant variation is seen in the Sunyaev-Zel'dovich effect image derived from {\sl ALMA}. This study has confirmed the subsonic nature of sloshing gas predicted by simulations (e.g., Roediger et al.\ 2011; ZuHone et al.\ 2010). The line-of-sight gas motion can be measured directly from line centroids based on the Doppler effect. Line centroid measurements have been performed in a number of galaxy clusters with CCDs (Dupke \& Bregman 2006; Ota et al.\ 2007; Tamura et al.\ 2011; Ueda et al.\ 2019). However, these results are mostly upper limits or of low significance due to the poor spectral resolution of CCDs ($\Delta E\sim150 {\rm eV}$). In contrast, the microcalorimeter on board {\sl Hitomi} provides a spectral resolution of 5\,eV. Based on the FeXXV K$\alpha$ line at 6.7\,keV, {\sl Hitomi} accurately measured a bulk motion of 150\,km/s for the gas at the center of the Perseus Cluster. Although this is the only ICM measurement made by {\sl Hitomi} due to its untimely loss, it has demonstrated that calorimeter science can revolutionize our view of the gas dynamics of the ICM. \begin{figure} \floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top},capbesidewidth=5.8cm}}]{figure}[\FBwidth] {\caption{ {\bf Top:} The {\sl Chandra} X-ray image of Abell~3667 (left) was compared with the simulated gas-stripped galaxy for an inviscid atmosphere (right). The figure is taken from Ichinohe et al.\ (2017). {\bf Middle:} The {\sl Chandra} X-ray image of the Perseus cluster (left) was compared with the simulated cluster with an initial magnetic field of $\beta=200$ (right). The figure is taken from Walker et al.\ (2017). {\bf Bottom}: The temperature map of the infalling galaxy NGC 1404 obtained with deep Chandra observations (left, Su et al.\ 2017a) was compared with the temperature map produced by the numerical simulation specifically tailored to the case of NGC 1404 falling into the Fornax Cluster (right, Sheardown et al.\ 2018).}\label{fig:test}} { \includegraphics[width=0.5\textwidth]{Figure}} \vspace{-0.2cm} \end{figure} \section{What can we learn with future X-ray telescopes?} The biggest change promised by future missions is the ability to measure the kinematics of the intracluster medium directly with calorimeters at high spatial resolution. Thanks to the hierarchical formation process, massive clusters are a rare breed in the Universe. Galaxy groups and low-mass clusters are much more common, but for them the measurement of the FeXXV K$\alpha$ line at 6.7\,keV is not available. Instead, the centroid and broadening of the O VIII line at $18.967\mathring{A}$ (0.654\,keV), which is bright and isolated for the hot atmospheres of low-mass clusters and galaxies, can be used to map the bulk speed and turbulence of the gas. Using the FLASH simulation (Sheardown et al.\ 2018), sixte 1.3.6, and SOXS 2.2.0, we simulated a 200\,ksec observation of a nearby X-ray bright galaxy like NGC~1404 using an instrument with an effective area of 1.4\,$m^2$ and with a superb spectral resolution, similar to the X-ray Integral Field Unit (X-IFU) on board Athena (Barret et al.\ 2018). A simulated image and an example spectrum are shown in Figure~\ref{fig:6}-left, demonstrating its 2.5\,eV spectral resolution with $5^{\prime\prime}$ pixels.
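Such line-centroid velocimetry reduces to the Doppler relation. The following is a minimal sketch (ours, with the FeXXV K$\alpha$ rest energy as an assumed input) of converting a measured centroid into a line-of-sight bulk velocity:
\begin{verbatim}
C_KM_S = 299792.458   # speed of light [km/s]
E_REST = 6.7          # Fe XXV K-alpha rest-frame energy [keV]

def los_velocity(e_measured_kev, z_cluster=0.0):
    """Line-of-sight bulk velocity [km/s] from a measured line centroid.

    Non-relativistic Doppler formula; the observed centroid is first
    shifted to the cluster rest frame using the cluster redshift.
    Positive values indicate motion away from the observer.
    """
    e_rest_frame = e_measured_kev * (1.0 + z_cluster)
    return C_KM_S * (E_REST - e_rest_frame) / E_REST
\end{verbatim}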
We have also simulated a $3\times3$ mosaic of 30\,ksec observations for an instrument with a large field of view and a low and stable background, resembling the Wide Field Imager (WFI) on board Athena, as shown in Figure~\ref{fig:6}-right. We can capture the entire ICM of a cluster like Fornax out to (and even beyond) the virial radius and detect fainter sloshing cold fronts at larger radii, which is not feasible with existing missions for a reasonable exposure time. We have selected regions of interest to directly measure their gas motions and turbulence with X-IFU simulations. We can further identify uneven edges at the cold front and turbulent regions in the path of the infalling object. The details of these faint structures match the resolution of the simulations, which will transform our understanding of the microphysics over a large span of radii in the ICM. The gas distribution at large radii will also pin down the initial conditions for the simulation, maximizing the science return from the tailored simulations. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{spectrum} \includegraphics[width=0.42\textwidth]{fornax_wfi} \caption{\footnotesize \label{fig:6} {\bf Left:} Simulated 200\,ksec X-IFU observation of NGC~1404 in the 0.5--2.0 keV energy band. The O VIII line is relatively isolated and sufficiently bright, and can be used to constrain gas dynamics in low-mass clusters and galaxies. {\bf Right:} Simulated WFI $3\times3$ 30\,ksec mosaic observations of Fornax. Subtle substructures and the cold front at large radii can be detected.} \vspace{-0.4 cm} \end{figure} \section{Concluding Remarks} The vast bulk of the hot baryons in the Universe is in the form of the intracluster medium, a hot diffuse plasma emitting X-rays via thermal bremsstrahlung and atomic lines. With {\sl Chandra} and {\sl XMM-Newton} observations and (magneto-)hydrodynamic simulations, tremendous progress has been made in understanding how clusters are assembled and how energy is transported in the ICM. Future missions such as XARM, Athena, and Lynx will be equipped with calorimeters, allowing us to measure gas motions and turbulence directly and to put quantitative constraints on the transport coefficients of the ICM. By the end of the 2020's, hundreds of thousands of galaxy clusters will be detected by various cluster surveys such as eROSITA (Pillepich et al.\ 2018). Most of them are expected to be low-mass clusters. The O VIII line is relatively isolated and sufficiently bright, and can be used to constrain gas dynamics in such low-mass clusters and galaxies. Through a connection between the micro- and macro-scale astrophysics in the ICM, our knowledge of hydrodynamics in galaxy clusters will be revolutionized. \pagebreak
{ "timestamp": "2019-04-16T02:19:32", "yymm": "1904", "arxiv_id": "1904.06739", "language": "en", "url": "https://arxiv.org/abs/1904.06739" }
\section{Introduction} The possibility of employing the spatial degrees of freedom of photons for communications is gaining interest in recent years due to their unbounded dimensionality \cite{wang2012terabit,nagali2009quantum,bozinovic2013terabit,krenn2014generation}. A natural basis to span the transverse profile of photons is that of Laguerre-Gaussian (LG) modes, which are characterized by two topological numbers: $l\in \mathbb{Z}$, the orbital index, describing the orbital-angular-momentum (OAM) in units of $\hbar$ per photon in the beam, and $p\in \mathbb{Z}_+$, the radial index or radial quantum number. Essential for utilizing LG modes is the ability to perform mode sorting, or demultiplexing, on the incoming physical data flow. There are, essentially, two approaches to demultiplexing. The first approach uses (usually complicated) optical setups in which the $l$ and $p$ degrees of freedom are coupled to other degrees of freedom such as angle of propagation and polarization. Most such methods address either the OAM \cite{lightman2017miniature, doster2017machine, lohani2018use} or the radial index \cite{gu2018gouy, zhou2017sorting} degree of freedom, while a recent measuring method handles both \cite{bouchard2018measuring}. The second approach, which emerged recently, suggests using just a camera to detect the intensity of the incoming light beam and utilizing a deep neural network (DNN) to classify the beam. To date, demonstrated DNN-based demultiplexers have addressed solely the OAM degree of freedom of light. Here, we experimentally demonstrate a DNN-based mode sorter able to classify both topological numbers of LG modes, i.e., both the OAM and the radial index. Our solution uses two concatenated DNNs. One network is used for mode classification, and it is trained on numerically generated "ideal" images of LG modes and two-mode superpositions. The other network is a calibration network which converts experimentally detected images, which suffer from optical aberrations and noise, into ideal numerical images that are then fed to the classifying network. \section{Methods} \subsection{Data generation} Laguerre-Gauss modes are solutions of the paraxial wave equation in cylindrical coordinates. They are given by \cite{gu2018gouy}: \begin{multline} LG_{l,p} (r, \phi, z) = \sqrt{\frac{2 p!}{\pi(p + |l|)!}} \frac{1}{w_z} \bigg(\frac{\sqrt{2}r}{w_z}\bigg)^{|l|} L_p^{|l|} \bigg(\frac{2r^2}{w_z^2}\bigg) \\ \exp \bigg( -\frac{r^2}{w_z^2} + i (\frac{kr^2}{2 R_z} + l\phi - (2p + |l| + 1) \varphi_g)\bigg), \label{eq:lauggere_gauss_mode} \end{multline} where $l$ and $p$ are the orbital and radial indices respectively, $L_p^{|l|}$ are the generalized Laguerre polynomials, $w_z = w_0 \sqrt{1 + (z/z_R)^2}$ is the beam radius at $z$ with $w_0$ being the waist at $z=0$, $z_R = (\pi w_0^2)/ \lambda$ is the Rayleigh range, $R_z = z(1 + (z_R/z)^2)$ is the radius of curvature, $\lambda$ is the wavelength, $k=2\pi/\lambda$ is the wave number and $\varphi_g = \arctan(z/z_R)$ is the Gouy phase. Our work uses both numerically and experimentally generated LG modes and their superpositions.
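Generating the numerically "ideal" mode images used below is straightforward; the following is a minimal sketch (ours, not the authors' code) of evaluating Eq.~(\ref{eq:lauggere_gauss_mode}) at the waist plane ($z=0$, where $w_z=w_0$, $1/R_z=0$ and $\varphi_g=0$) and forming a two-mode superposition intensity image:
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import genlaguerre

def lg_field(r, phi, l, p, w0=1.0):
    # LG_{l,p} field at z = 0: normalization and radial profile follow
    # Eq. (1); the curvature and Gouy terms vanish at the waist.
    norm = np.sqrt(2.0 * factorial(p)
                   / (np.pi * factorial(p + abs(l)))) / w0
    rho2 = 2.0 * r**2 / w0**2
    radial = rho2 ** (abs(l) / 2.0) * genlaguerre(p, abs(l))(rho2)
    return norm * radial * np.exp(-r**2 / w0**2 + 1j * l * phi)

# A 224x224 intensity image of an equal two-mode superposition.
x = np.linspace(-3.0, 3.0, 224)
X, Y = np.meshgrid(x, x)
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)
field = (lg_field(R, PHI, l=2, p=1)
         + lg_field(R, PHI, l=0, p=3)) / np.sqrt(2)
intensity = np.abs(field) ** 2
\end{verbatim}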
Experimentally generated data was acquired in a setup consisting of a 532\,nm CW laser (Quantum Ventus 532 Solo Laser), which is expanded and collimated before reflecting off a phase-only spatial light modulator (Holoeye Pluto SLM). The phase masks loaded onto the SLM were encoded by extracting the phase of our numerically generated superimposed modes and then adding a blazed grating to it. The resulting image in the first order of diffraction of the grating is Fourier transformed using a 50\,cm lens and imaged by a camera (DataRay WinCamD-LCM4). The experimentally generated data differs from the numerically generated data of ideal modes and their superpositions due to inherent aberrations and noise in the optical system. Different datasets, each with a different number of superimposed modes, were created. We mark the datasets with $DB_N^{type}$ where $N=1,2,3$ is the number of superimposed modes and $type \in \{num,exp\}$ stands for a numerically or experimentally generated dataset. Each member of the datasets was realized according to $\frac{1}{\sqrt{N}} \sum_{n=1}^{N}LG_{l_n,p_n}$ where $l_n$ and $p_n$ are the orbital and radial indices used in a particular superposition. All images are set to be of size $224\times224$ pixels with 256 values per pixel in the range $[-1,1]$, while for the $exp$ datasets the average pixel value (over all images) was set to 0 and the variance to 1 (we note that experimentally acquired pictures are first obtained at a size of $500\times500$ with pixel values in the range $[0,255]$). Initially, the datasets $DB_N^{num}, N=1,2,3$, each contain $36^N$ members, while $DB_2^{num}$ contains $DB_1^{num}$ and $DB_3^{num}$ contains both $DB_2^{num}$ and $DB_1^{num}$. For training a DNN, very large datasets are required. For this reason the datasets are "augmented" with new members generated from the old ones. Specifically, for $DB_1^{num}$ and $DB_2^{num}$ we augment the basic training dataset of images by adding 70 image variations per image in the basic set through beam rotations (rotation angles were uniformly distributed over $[0,2\pi]$ with 1 deg $\simeq 17$\,mrad resolution), beam shifts (uniformly distributed over $[0,16]$ pixel shifts in both x and y coordinates) and addition of Gaussian noise with mean $\mu = 0$ and variance $\sigma^2 = 0.2$. This amounts to overall $\sim2,500$ and $\sim90,000$ samples for $DB_1^{num}$ and $DB_2^{num}$, respectively. In $DB_3^{num}$ the number of augmentations per unique combination was $2$, leading to a total dataset size of $\sim 90,000$. Similarly, $DB_1^{exp}$ and $DB_2^{exp}$ were generated and numerically augmented 100-fold by applying similar rotations and random noise (with no beam shifts).
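A minimal sketch of one such augmentation step (our illustration; helper and parameter names are ours) could look as follows:
\begin{verbatim}
import numpy as np
from scipy.ndimage import rotate, shift

rng = np.random.default_rng(0)

def augment(img, max_shift=16, noise_sigma=np.sqrt(0.2)):
    # One random variation: beam rotation, (x, y) beam shift, and
    # additive Gaussian noise, mirroring the augmentations above.
    angle = rng.uniform(0.0, 360.0)
    out = rotate(img, angle, reshape=False, order=1)
    dx, dy = rng.integers(0, max_shift + 1, size=2)
    out = shift(out, (dy, dx), order=1)
    out = out + rng.normal(0.0, noise_sigma, size=out.shape)
    return out
\end{verbatim}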
\subsection{Network Architecture} Our solution to the Beam Profiler Network (BPNet) consists of two concatenated networks, trained separately: a calibration network and a classifier network. Both networks were created using the keras API \cite{chollet2015keras}. The calibration network, based on U-Net's architecture \cite{ronneberger2015u}, converts LG beam images taken in the lab (both single-mode and superpositions) into ideal images of the same LG beams without changing the overall size of the image. The output of the network for each image, which is also the label during training, is an image of the same LG mode (or superposition of LG modes), albeit an ideal one, created using a simulation. Since large parts of each image in the dataset are dark, simple Mean-Square-Error (MSE) or Mean-Absolute-Error (MAE) loss functions do not allow the calibration network to converge properly, as with such loss functions the network converges to a poor local minimum, predicting only dark images. To solve this convergence issue, we introduce here a new type of loss function, called "Histogram Weighted Loss" (HWL). This loss gives higher significance to pixel values that are less common in the image, since they are the ones carrying important information in sparse images. To implement this cost function we first calculate the histogram of each image and modify the calculation of the regular MAE loss during training by multiplying the loss of each pixel by 1 minus the pixel probability in the image (which is determined by the histogram), raised to the power $\gamma$. In this case, a wrong prediction of a less common pixel will have a higher cost. The Histogram Weighted Loss is given by the following equation: \begin{equation} HWL = \frac{1}{N \times M}\sum_{j=1}^{M}\sum_{i=1}^{N}(1-prob_{i,j})^\gamma |y_{i,j} - y_{i,j}^p|, \label{eq:hist_loss} \end{equation} where $N$ is the number of pixels in an image, $M$ is the number of images in a given batch, $y_{i,j}$ is the value of pixel $i$ in the $j$th (target) image in the batch, $y_{i,j}^p$ is the prediction for the value of the same pixel generated by the network, $prob_{i,j}$ is the probability for said pixel value in the target image (extracted from the histogram) and $\gamma=4$ is a hyper-parameter which we found by trial and error to produce the best results. The Histogram Weighted Loss is somewhat related to the concept of Focal Loss \cite{lin2017focal}, in which whole images are given different significance for a specific loss calculation.
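A sketch of how Eq.~(\ref{eq:hist_loss}) might be implemented as a TensorFlow loss (our implementation, not the authors' code; pixel values are assumed to take 256 discrete levels in $[-1,1]$, as described above):
\begin{verbatim}
import tensorflow as tf

def histogram_weighted_loss(y_true, y_pred, gamma=4.0, n_bins=256):
    # Map pixel values in [-1, 1] onto integer bins 0..n_bins-1.
    bins = tf.cast(tf.round((y_true + 1.0) * 0.5 * (n_bins - 1)),
                   tf.int32)
    flat_bins = tf.reshape(bins, (tf.shape(y_true)[0], -1))
    n_pix = tf.cast(tf.shape(flat_bins)[1], tf.float32)

    # Per-image histogram, then the probability of each pixel's value.
    counts = tf.map_fn(
        lambda b: tf.cast(tf.math.bincount(b, minlength=n_bins),
                          tf.float32),
        flat_bins, fn_output_signature=tf.float32)
    probs = tf.gather(counts, flat_bins, batch_dims=1) / n_pix

    # MAE per pixel, weighted by (1 - prob)^gamma as in Eq. (2).
    err = tf.abs(tf.reshape(y_true - y_pred,
                            (tf.shape(y_true)[0], -1)))
    return tf.reduce_mean((1.0 - probs) ** gamma * err)
\end{verbatim}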
The classifier network classifies images fed by the calibration network according to the index numbers of the modes comprising the images. The classifier network is trained using simulated images of LG beams (both single-mode and superpositions). This network is based on MobileNet V2's architecture \cite{sandler2018mobilenetv2}, where the last fully-connected layer outputs 36 labels ($(p,l)=(0,0),(1,0),\ldots,(5,5)$). We refer to the set of these 36 labels as the "modes vector". Each label output is set in the range $[0,1]$, indicating the probability for successful detection of a specific mode. In the results section below, we decide on a mode being detected if the appropriate value in the modes vector is higher than 0.5. The input to this network is a numerically calculated image of an LG beam, or a superposition of such beams, and its output is a labeling of the different modes it contains. \section{Results} The classifier network was first tested in a stand-alone configuration, by being fed directly by the numerically generated and augmented datasets $DB_N^{num}, N=1,2,3$. We split our three datasets into training (85\%) and validation (15\%) sets. The validation scored a perfect 100\% success rate in all three cases. This shows that the network architecture we used was adapted effectively enough so as to learn even small datasets ($DB_1$) and datasets without a lot of repetition in the samples ($DB_3$). It is notable that when noisy but otherwise undistorted images are supplied to the classifier network, it exhibits very high performance. However, when testing the classifier network on experimental data that was measured in the lab, the inherent aberrations (present in any optical system) degrade the performance considerably (see e.g. Ref.\cite{lohani2018use}). At this point we can choose between two strategies. One option is training the classifier network on experimental data (a strategy that was adopted, for example, in Ref.\cite{doster2017machine}), which has two problems: the performance would rely on the amount of distortion (aberrations) in the optical system, and the solution is applicable only to a specific optical setup. The second strategy, which we chose, is using a calibration network, which lets us use a high-performance classifier network that could work in principle with any setup-specific calibration network. The performance of the whole system thus becomes dependent mostly on the quality of the calibration network, whose performance depends in turn on the quality of the optical system. The next stage was training the calibration network and then testing the whole BPNet (calibration+classifier). For this purpose we divided $DB_1^{exp}$ and $DB_2^{exp}$ into a training set (72\%), a validation set (18\%) and a test set (10\%). The whole network was tested in several different training configurations as described in Table~\ref{table: results}. The main conclusion from these results is that we were able to obtain state-of-the-art real-world single-mode detection and two-mode superposition demultiplexing. It is noteworthy that single-mode training yielded perfect performance although the dataset for training was relatively small. The relatively low performance for demultiplexing two-mode superpositions when the classifier network was trained with three-term superpositions is attributed to the small level of augmentation in $DB_3^{num}$. A few examples of successful demultiplexing-detection by the BPNet for two-mode superpositions are shown in Fig.\ref{fig:successful_SP2}. In this figure we show the phase masks that were applied to the SLM in the experimental setup, the input to the network, which is the image captured by the camera, the prediction of the calibration network which is fed to the classification network, as well as the ground truth for that network. An interesting case is shown in Fig.\ref{fig:successful_SP2}(d), in which the calibration network added an artifact to the prediction but the resulting classification was still correct. This shows that the classifier network has some degree of robustness to artifacts introduced by the calibration network. Examples of some unsuccessful results are shown in Fig.\ref{fig:unsuccessful_SP2}. In these cases, it is clear that the images captured (using an automated procedure) by the camera are distorted and clouded. Even though the results were misclassified for these cases, we can appreciate that the calibration network locked on to some features in the input images. Explicitly, we can observe that in all images (except Fig.\ref{fig:unsuccessful_SP2}(a)), even though the ground truth and the captured image look similar, some additional artifacts were introduced to the image and so the input was misclassified. In Fig.\ref{fig:unsuccessful_SP2}(a) the classification network adds an additional mode to the label of the image, due to a deformation in the outer ring. In Fig.\ref{fig:unsuccessful_SP2}(b) the calibration network converts the input image into a completely different superposition of modes. In Fig.\ref{fig:unsuccessful_SP2}(c) a spurious ring appears again, but this time instead of adding another mode it simply increases the OAM index for the first mode and the radial index for the second mode. In Fig.\ref{fig:unsuccessful_SP2}(d) the input looks similar to the ground truth image, but the calibration network (probably due to added noise) predicts an image with a closed inner ring, therefore leading to the omission of one of the superimposed modes. Still, it is noteworthy that, in all cases, the predicted images were close to the ground truth images.
Finally, one could suggest that the calibration network did not actually learn to transform its input images into undistorted images (undoing the optical aberrations in the experimental setup), but that it simply learned a mapping between the input and output images. To refute this argument we fed a few random images to the calibration network (see Fig.~\ref{fig:random_input}), where we observe that the calibration network does not simply map any image with some features to a multiplexed mode; instead, it learns some kind of transfer function (although it might be restricted to working correctly with the LG modes fed to the network). \section{Conclusions} We have realized a novel method for spatial-mode demultiplexing, addressing the two topological numbers characterizing Laguerre-Gaussian modes, using a flow of two concatenated deep neural networks: a calibration network (transferring from experimentally acquired images in the lab to "ideal" images) and a classifier network. We have shown that our classifier is able to demux up to three superimposed spatial modes with perfect accuracy, while we demonstrated that the whole flow exhibits state-of-the-art performance for detecting two-mode superpositions acquired in the lab. An important ingredient in this work is the introduction of the "Histogram Weighted Loss", which helps handle sparse images where most pixels do not carry information. This loss function might be relevant to other fields that encounter sparse images, such as medical imaging and astrophysics. \begin{figure}[htbp] \centering \includegraphics[width=1\linewidth,trim={3cm 8cm 4cm 7.3cm},clip]{figure1.pdf} \caption{\textbf{Successful classification of two-modes superpositions:} Some examples of successful calibration and classification of modes using the whole BPNet flow. The "Mask" column shows phase masks that were loaded onto the SLM. The "Input" column shows images captured by the camera. The "Prediction" column shows the output of the calibration network while the "Ground Truth" column shows the projected output for a perfect calibration. The superpositions $((p_1,l_1),(p_2,l_2))$ demonstrated are: \textbf{(a)} ((4,0),(2,5)) \textbf{(b)} ((1,5),(0,0)) \textbf{(c)} ((5,1),(0,4)) \textbf{(d)} ((0,4),(0,5)). } \label{fig:successful_SP2} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=1\linewidth,trim={3cm 8cm 4cm 7.3cm},clip]{figure2.pdf} \caption{\textbf{Unsuccessful calibration and classification of two-modes superpositions:} Some examples of unsuccessful calibration and classification of the whole BPNet flow. The "Mask" column shows phase masks that were loaded onto the SLM. The "Input" column shows images captured by the camera. The "Prediction" column shows the output of the calibration network while the "Ground Truth" column shows the projected output for a perfect calibration. The superpositions demonstrated and the unsuccessful predictions are: \textbf{(a)} ((1,2),(2,2)) \protect$\to$ ((1,2),(2,2),(3,2)) \textbf{(b)} ((5,5),(0,0)) \protect$\to$ ((5,5),(0,2)) \textbf{(c)} ((0,0),(2,0)) \protect$\to$ ((1,0),(2,1)) \textbf{(d)} ((4,5),(2,3)) \protect$\to$ ((4,5)). } \label{fig:unsuccessful_SP2} \end{figure} \begin{table}[h!]
\centering \begin{tabular}{ | m{7em} | m{6em}| m{6em} | m{3em} | } \hline Test-Set & Calibration network DB & Classification network DB & Results \\ \hline Single Mode & $DB_1^{exp}$ & $DB_1^{num}$ & $100\%$ \\ \hline Single Mode & $DB_1^{exp}$ & $DB_2^{num}$ & $96.39\%$ \\ \hline Single Mode & $DB_1^{exp}$ & $DB_3^{num}$ & $86.39\%$ \\ \hline 2 Modes Superposition & $DB_2^{exp}$ & $DB_2^{num}$ & $91.3\%$ \\ \hline 2 Modes Superposition & $DB_2^{exp}$ & $DB_3^{num}$ & $63.45\%$ \\ \hline \end{tabular} \caption{Results for different training configurations. "DB" stands for the training set being used as detailed in the text.} \label{table: results} \end{table} \begin{figure}[htbp] \centering \includegraphics[width=0.75\linewidth,trim={3cm 12cm 10cm 8cm},clip]{figure3.pdf} \caption{\textbf{Random inputs:} Some examples of random inputs to the calibration network and their predictions. } \label{fig:random_input} \end{figure} \pagebreak \bibliographystyle{unsrt}
{ "timestamp": "2019-04-16T02:19:18", "yymm": "1904", "arxiv_id": "1904.06735", "language": "en", "url": "https://arxiv.org/abs/1904.06735" }
\section{Introduction} Differential equations, including ordinary differential equations (ODEs) and partial differential equations (PDEs), are key mathematical models for various physics and engineering applications. In most situations, it is impractical to find analytical solutions, and numerical solutions become increasingly popular for these problems. When solving ODEs/PDEs, one seeks a function satisfying both (1) the differential equations within the domain, and (2) all initial/boundary conditions. Common numerical methods for ODEs are Runge-Kutta methods, linear multistep methods, and predictor-corrector methods \cite{burden2001numerical}. As for PDEs, numerous methods for discretizing the physical or spectral space have been developed, and the most common choices are the finite difference method (FDM), the finite volume method (FVM), the finite element method (FEM), and spectral methods. These methods are special cases of the weighted residual method. The Galerkin method is another numerical method based on weighted residuals for converting a continuous operator to a discrete form. It applies the method of variation of parameters to a function space and converts the original equation to a weak formulation. In the present study, we take advantage of fast-developing machine learning techniques and propose a framework for solving ODEs/PDEs by applying the variation of parameters of a neural network \cite{raissi2017physics,lagaris1998artificial}. A neural network (NN) is inspired by complex biological neural networks and is now a computing system widely applied in machine learning \cite{haykin2009neural,krizhevsky2012imagenet,lecun2015deep}. The feedforward network with full connections between neighboring layers is one of the first models introduced \cite{mcculloch1943logical,rosenblatt1958perceptron}, and algorithms for evaluating and training such networks have been studied since then \cite{rosenblatt1958perceptron,rosenblatt1962principles,minsky1969perceptrons, werbos1974beyond}. Besides applications in image recognition \cite{krizhevsky2012imagenet}, natural language processing \cite{lecun2015deep}, cognitive science \cite{lake2015human}, and genomics \cite{alipanahi2015predicting}, neural networks are also a powerful tool for function approximation \cite{hornik1989multilayer,cybenko1989approximation, jones1990constructive,carroll1989construction,liu2019neural}. It has been proved that functions in the form of a multilayer feedforward network (MFN) are dense in function spaces such as $C(I)$ and $L^2(I)$ ($I$ is the unit cube\footnote{The unit cube on $\mathbb{R}^n$ is defined as $[0, 1]^n$.}). It is also easily shown that increasing the number of layers of an MFN will enormously increase its function approximation capability. However, deep neural networks are difficult to train with gradient methods such as backpropagation due to vanishing gradients \cite{hochreiter1991untersuchungen,hochreiter2001gradient}. Here four-layer feedforward networks are chosen to avoid using special techniques such as ResNet \cite{he2016deep}. Due to the ability of NNs in function approximation, many efforts have been made to construct ODE/PDE solvers based on NNs \cite{mall2014chebyshev,berg2017unified,raissi2017physics}. One of the major difficulties in such solvers is how to train a particular NN to satisfy the boundary conditions accurately, since the original form of an NN trial function does not match boundary conditions the way trial functions in Galerkin methods do.
One strategy is the penalty method \cite{raissi2017physics,liu2019neural,wei2018machine}. The penalty method has been applied successfully to the Burgers equation \cite{raissi2017physics}, the Laplace equation \cite{liu2019neural}, and diffusion equations \cite{wei2018machine}, but only limited accuracy can be achieved. Another issue is how to evaluate the derivatives in the equations, which needs to be done in a way compatible with the NN-based solver. One option is so-called automatic differentiation (AD) \cite{raissi2017physics}. AD evaluates the derivative with respect to the input variable of any function defined by a computer program, and it is done by performing a non-standard interpretation of the program: the interpretation replaces the domain of the variables and redefines the semantics of the operators \cite{rall1981automatic}. In our framework, we define a trial function which consists of a bulk term and a boundary term. The boundary term matches the initial/boundary conditions, and the bulk term satisfies a reduced problem with relaxed boundary constraints. The boundary term can be constructed explicitly. We then define for the bulk term a loss function, which is actually the residual of the reduced problem. Such a loss function does not involve any boundary conditions, since the boundary conditions are relaxed in the reduced problem. Finally, the bulk term, and therefore the trial function, is determined by minimizing the loss function. Machine learning techniques are used for this minimization of the loss function. We refer to this new strategy as the constrained multilayer feedforward network (CMFN) method. With this novel strategy we will show that much higher accuracy can be achieved. It should also be pointed out that any method can be used to minimize the loss function. Before proceeding, we would like to clarify some terminology. In the language of the machine learning community, the trial function is usually called the \emph{model}. The minimization of the loss function is actually a \emph{learning process}, during which the trial function \emph{learns} the correct data distribution of the analytical solution. The minimization process is also a standard \emph{optimization} problem, and it is equivalent to \emph{training} in machine learning. Thus, the terminologies ``training'', ``optimization'', and ``minimization'' will be used interchangeably throughout this paper. The paper is organized as follows. In section~2 we describe the framework in detail. Section~3 presents some numerical examples. Finally, section~4 concludes the paper. \section{Numerical Method} To solve ODEs/PDEs numerically, one finds a function which satisfies the differential equations inside the domain and all initial/boundary conditions at (temporal/spatial) boundaries. That is, two parts of information need to be transferred into the numerical solver. For instance, in FVM the former part of the information is transferred by flux reconstruction and the latter part by operations on the boundary cells. In the CMFN method, the former part of the information is transferred by directly applying the differential operators with the AD technique. The initial/boundary conditions are handled by the boundary term in the trial function. The CMFN method is based on the concept of the MFN\@. An MFN with $n$ layers can be defined as a computing algorithm as follows. The input layer as a vector is denoted by $y^{(1)}$, the output layer is denoted by $y^{(n)}$, and the hidden layers by $y^{(i)}$.
The output layer $y^{(n)}$ is computed from the hidden layer $y^{(n-1)}$: \begin{gather*} y^{(n)}_k = \sum_j \theta^{(n-1)}_{kj} y^{(n-1)}_j + \beta_k^{(n-1)}, \end{gather*} and the hidden layers are computed recursively by: \begin{gather*} \begin{cases} z^{(i+1)}_k = \sum_j \theta^{(i)}_{kj} y^{(i)}_j + \beta_k^{(i)} \\ y^{(i+1)}_k = \phi(z^{(i+1)}_k) \end{cases} \quad i = 1, 2, \ldots, n-2. \end{gather*} Explicitly, a three-layer feedforward network $N(x;\theta, \beta)$ is defined as a superposition of the activation function $\phi$ over linear transformations ($x=\{x_i\}_{n\times1}$ as input layer and $N(x; \theta, \beta) = \{N_k\}_{m\times1}$ as output layer): \begin{gather} \label{eq:3layerMFN} N_k = \sum_j \theta_{kj}^{(2)}\phi(\sum_i \theta_{ji}^{(1)}x_i+\beta_j^{(1)}) +\beta_k^{(2)}, \end{gather} and a four-layer MFN is \begin{gather} \label{eq:4layerMFN} N_l = \sum_k \theta_{lk}^{(3)}\phi( \sum_j \theta_{kj}^{(2)}\phi( \sum_i \theta_{ji}^{(1)}x_i+\beta_j^{(1)}) + \beta_k^{(2)}) + \beta_l^{(3)}. \end{gather} The parameters of the MFN are its weights $\theta = \{\theta_{ij}^{(k)}\}$ and biases $\beta=\{\beta_j^{(k)}\}$. It has been proved that the MFN \cref{eq:4layerMFN} with a proper activation $\phi$ is dense in $C(I)$, namely the set of all continuous functions defined on the unit cube \cite{cybenko1989approximation}. Therefore, for any continuous function $y(x)$ defined on a finite domain, a set of parameters $(\theta^*, \beta^*)$ can be found such that the corresponding network $N(x; \theta^*, \beta^*)$ is close enough to $y(x)$, i.e.\ the norm $\Vert y(x) - N(x; \theta^*, \beta^*)\Vert $ can be made sufficiently small. A similar conclusion holds for $y \in L^2(I)$, i.e.\ $\int_I |y(x)|^2 \dd{x} <\infty$. Such properties guarantee that an optimal parameter set $(\theta^*, \beta^*)$ exists such that the corresponding MFN $N(x; \theta^*, \beta^*)$ is a good numerical approximation of the solution. A well-posed ODE/PDE with Dirichlet boundary condition can be written as \begin{equation} \begin{cases} \mathcal{L}u = f &\qqtext{in} \Omega, \\ \mathcal{B}u = g &\qqtext{on} \partial \Omega. \end{cases} \label{eq:general-DE} \end{equation} In the CMFN method we define a model function: \begin{equation} \label{eq:CMFN-basic} \hat{y}(x; \theta, \beta) = G(x) + \tilde{N}(x;\theta,\beta) \equiv G(x) + w(x)\cdot N(x;\theta,\beta). \end{equation} As the boundary operator $\mathcal{B}$ is linear and algebraic, we choose the two terms $G(x)$ and $w(x)$ in \cref{eq:CMFN-basic} such that \begin{enumerate} \item $\mathcal{B}G=g$ when $x \in \partial \Omega $, \item $\mathcal{B}\tilde{N}\to0$ when $x\to\partial\Omega$. \end{enumerate} $G(x)$ is the boundary term, a pre-defined function; $\tilde{N}(x)$ is the bulk term; and $N(x)$ is the unknown part, which is approximated by a neural network. Through \cref{eq:CMFN-basic}, the original problem with respect to $u(x)$ is reduced to solving a new differential equation with respect to $\tilde{N}(x)$. The new equation is defined as the \emph{reduced equation}, and the unknown part $N(x)$ separated from the bulk term is called the \emph{reduced solution}. In \cref{eq:CMFN-basic}, as long as the pre-defined weight $w(x)$ is a bounded continuous function, the bulk term $\tilde{N}(x)$ is continuous and bounded according to \cref{eq:4layerMFN}. The bulk term is further written as $\tilde{N}(x) = w(x)\cdot N(x)$, where the pre-defined weight $w(x)$ satisfies: \begin{enumerate} \item $w(x)\to0$ as $x\to\partial\Omega$ (vanishing on domain boundary), \item for all $x^* \in\Omega$, $w(x^*)\neq 0$ (non-vanishing within domain).
\end{enumerate} Now the boundary conditions are automatically satisfied; we say the boundary conditions are \emph{relaxed} in the reduced equation. Once the reduced equation is determined, the loss function can be constructed without considering the original boundary conditions. By substituting the trial function $\hat{y}(x) = G(x) + w(x)\cdot N(x)$ into \cref{eq:general-DE}, the original problem is converted to its reduced equation: \begin{equation} \label{eq:reduced-general} \mathcal{L}[G + w\cdot N] = \tilde{\mathcal{L}}[N] = f \quad N \in C(\mathbb{R}^n). \end{equation} One may think that there could be \emph{multiple} solutions to the reduced equation since it has no boundary conditions. However, while the original problem \cref{eq:general-DE} has a unique solution $y^*$, the reduced one \cref{eq:reduced-general} should also have a \emph{unique} solution $(y^*-G)/w$. The paradox indicates that, among all solutions to \cref{eq:reduced-general}, there exists a unique solution satisfying $\mathcal{B} \tilde{N} \to 0$ as $x\to\partial\Omega$. We do not have a rigorous proof for this statement yet, but as supported by the examples shown later, a unique solution can always be obtained. After construction of the trial function $\hat{y}$, the loss function towards which the optimization is done is defined by the residual $\mathcal{R}N = \tilde{\mathcal{L}}N-f$ of \cref{eq:reduced-general}: \begin{equation} \label{eq:loss-general} L = \int_\Omega \left\langle(\mathcal{R}N)(x),(\mathcal{R}N)(x)\right\rangle \dd{x} = \sum_{x^*\in T(\Omega)} \Vert(\mathcal{R}N)(x^*)\Vert^2 , \end{equation} where $T(\Omega)$ is the training set containing points selected from the domain $\Omega$. The operator $\mathcal{R}$ is defined by AD instead of being worked out manually. This not only saves researchers from laborious work \cite{berg2017unified}, but also produces robust and reliable code \cite{baydin2015automatic,rall1981automatic}. There are successful AD implementations on nearly all programming platforms \cite{baydin2015automatic}. In this work, the reverse-mode AD application programming interface of TensorFlow \cite{tensorflow2015-whitepaper} is used. Since AD removes the difficulty of evaluating complicated derivatives, high-order differential operators $\mathcal{L}$ are handled in this work without extra effort. The final stage of the framework is optimization, during which the loss function $L$ defined by \cref{eq:loss-general} is minimized with respect to its free parameters. In cases where the MFN $N(x;\theta, \beta)$ is the reduced solution, its weights and biases $(\theta, \beta)$ are trained to minimize $L=L(\theta, \beta)$. In this work, the optimization is done by the second-order method L-BFGS \cite{Liu1989on}, instead of SGD, the most popular choice in building machine learning models \cite{lecun2015deep}. The second-order method is not always robust in general machine learning problems, but it serves well in the ODE/PDE solver according to our observations. All numerical examples presented in the later sections of this paper are trained by the second-order method L-BFGS, which greatly improves training efficiency. The training process requires a large amount of computational resources, which used to be an obstacle in the development of machine learning \cite{minsky1969perceptrons}. Parallelism and heterogeneous computing alleviate this problem, and the model in this work is defined and trained on TensorFlow \cite{tensorflow2015-whitepaper}.
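As a brief illustration of how AD supplies the derivatives in $\mathcal{R}$, the following minimal sketch (ours) uses nested \texttt{GradientTape} contexts in TensorFlow to obtain first and second derivatives of a placeholder trial function without any manual differentiation:
\begin{verbatim}
import tensorflow as tf

x = tf.linspace(0.0, 10.0, 1000)[:, None]  # training points

def y_hat(x):
    # Placeholder trial function; in practice G(x) + w(x) * N(x).
    return tf.sin(x)

with tf.GradientTape() as t2:
    t2.watch(x)
    with tf.GradientTape() as t1:
        t1.watch(x)
        y = y_hat(x)
    dy_dx = t1.gradient(y, x)    # first derivative, reverse-mode AD
d2y_dx2 = t2.gradient(dy_dx, x)  # second derivative via nesting
\end{verbatim}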
\section{Examples and Discussion} The first ODE example is a definite integral problem, given as an illustration: \begin{equation} \label{eq:1d-integral} \begin{cases} y'(x) = \cos x \\ y(x=0) = y_0 = 1 \end{cases}. \end{equation} The analytical solution is simply the integral of the R.H.S. of \cref{eq:1d-integral}: $y(x)=1 + \sin x$. To find a numerical solution on the domain $[0, 10]$, the trial function is defined as \begin{equation} \label{eq:1d-integral-model} \hat{y}(x;\theta,\beta) = y_0 e^{-x} + (1-e^{-x})N(x;\theta,\beta). \end{equation} It is easily verified that the requirements for $G(x)$ and $\tilde{N}(x)$ are all satisfied. The network $N(x;\theta, \beta)$ is set as a four-layer network with $20$ neurons in each hidden layer. The loss function is defined as \begin{equation} \label{eq:1d-interal-loss} L = \int_0^{10} \Vert \hat{y}' - \cos x\Vert^2 \dd{x} = \sum_{i=1}^{1000} | \hat{y}'(x_i) - \cos x_i |^2, \end{equation} where the points $\{x_i\}_{i=1,2,\ldots,1000}$ are uniformly selected in the interval $[0, 10]$. The loss function is minimized by the L-BFGS method; the result is illustrated in \cref{fig:1d-integral}. \begin{figure}[hptb] \centering \subfloat[][Model $\hat y(x)$\label{fig:1d-integral-y}]{% \includegraphics[width=0.45\textwidth]{1D-Integral-y.pdf} } \subfloat[][Error distribution of Model\label{fig:1d-integral-y-dev}]{% \includegraphics[width=0.45\textwidth]{1D-Integral-y-dev.pdf} }\\ \subfloat[][Bulk term $w(x)\cdot N(x)$\label{fig:1d-integral-core}]{% \includegraphics[width=0.45\textwidth]{1D-Integral-core.pdf} } \subfloat[][Error distribution of bulk term\label{fig:1d-integral-core-dev}]{% \includegraphics[width=0.45\textwidth]{1D-Integral-core-dev.pdf} }\\ \subfloat[][Reduced solution $N(x)$\label{fig:1d-integral-N}]{% \includegraphics[width=0.45\textwidth]{1D-Integral-N.pdf} } \subfloat[][Error distribution of reduced solution \label{fig:1d-integral-N-dev}]{% \includegraphics[width=0.45\textwidth]{1D-Integral-N-dev.pdf} } \caption{Results of definite integral problem in \cref{eq:1d-integral} with $\mathrm{Error} = 1.96\times10^{-4}$} \label{fig:1d-integral} \end{figure} The boundary condition is well maintained in this example. \Cref{fig:1d-integral-y,fig:1d-integral-core,fig:1d-integral-N} show that the trained MFN provides a correct reduced solution, as well as a good match in the bulk term and the whole model. However, the error distributions shown in \cref{fig:1d-integral-y-dev,fig:1d-integral-core-dev,fig:1d-integral-N-dev} reveal that the reduced solution actually has the lowest accuracy, especially on the domain boundary; the pre-defined constraints in $G(x)$ and $w(x)$ reduce the error of the model, and also turn the training process into a standard unconstrained optimization problem. Another important property shown in \cref{fig:1d-integral-y-dev} is that the solution learned by CMFN deviates from the analytical solution randomly, whereas traditional numerical methods usually accumulate error along iterations. This property is natural for the CMFN method since all data points enter the learning process on an equal footing, while in iterative methods the error grows cumulatively. A more accurate measure of the error in the solution provided by \cref{eq:1d-integral-model} is the $L^2$-norm of the error distribution: \begin{equation} \mathrm{Error} = \sqrt{\frac{1}{10} \int_0^{10} |y(x) - \hat{y}(x)|^2 \dd{x}}. \end{equation} The previous example has an average error of $1.96\times10^{-4}$.
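An end-to-end sketch of this definite-integral example, under the same illustrative assumptions as above (PyTorch standing in for TensorFlow; all identifiers ours), might look as follows.
\begin{verbatim}
# Sketch of the example in Eq. (1d-integral); not the authors' code.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(),
                          torch.nn.Linear(20, 20), torch.nn.Tanh(),
                          torch.nn.Linear(20, 1))

x = torch.linspace(0.0, 10.0, 1000).reshape(-1, 1)  # training points

def y_hat(x):
    # Trial function of Eq. (1d-integral-model):
    # y0 * exp(-x) + (1 - exp(-x)) * N(x), with y0 = 1.
    return torch.exp(-x) + (1.0 - torch.exp(-x)) * net(x)

def loss_fn():
    xr = x.clone().requires_grad_(True)
    y = y_hat(xr)
    dy = torch.autograd.grad(y, xr, torch.ones_like(y),
                             create_graph=True)[0]
    return ((dy - torch.cos(xr)) ** 2).sum()

opt = torch.optim.LBFGS(net.parameters(), max_iter=500,
                        line_search_fn="strong_wolfe")

def closure():
    opt.zero_grad()
    L = loss_fn()
    L.backward()
    return L

opt.step(closure)                      # L-BFGS runs its inner iterations

with torch.no_grad():                  # L2 error against 1 + sin(x)
    err = torch.sqrt(torch.mean((y_hat(x) - (1.0 + torch.sin(x))) ** 2))
print(f"L2 error ~ {err.item():.2e}")  # the paper reports 1.96e-4
\end{verbatim}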
The authors also tried other choices for defining the boundary term $G$ and the weight function $w$: $G^*(x) \equiv 1$ and $w^*(x) = x$. Numerical tests show that the definitions in \cref{eq:1d-integral-model} outperform these alternatives. Using $G^*(x)$ instead of the original $G(x)$ in \cref{eq:1d-integral-model} roughly doubles the average error, and using $w^*(x)$ instead of the original weight even reduces the accuracy by one order of magnitude. As mentioned above, the reduced equation with relaxed boundary conditions should have a unique solution, rather than multiple solutions, under the premise that the solution is bounded. The reduced equation for \cref{eq:1d-integral} is \begin{equation} \label{eq:1d-reduced} (1-e^{-x}) N' + e^{-x} N - y_0e^{-x} - \cos x = 0, \end{equation} and the general solution to \cref{eq:1d-reduced} is \begin{equation} \label{eq:1d-reduced-solution} N(x) = \frac{C - y_0e^{-x}}{1-e^{-x}} + \frac{\sin x}{1-e^{-x}} \quad C\in\mathbb{R}. \end{equation} At $x=0$, the first part on the R.H.S. of \cref{eq:1d-reduced-solution} is unbounded if $C\neq y_0$; if $N(x)$ is assumed to be a bounded function on $[0, 10]$, then \cref{eq:1d-reduced-solution} yields a unique solution, which is the proper reduced solution to the problem in \cref{eq:1d-integral}. The CMFN method treats all problems of the form \cref{eq:general-DE} equally, so any initial-value problem is solved with a similar process and accuracy. The following presents the solution of a boundary value problem (BVP) for a second-order ODE\@. It is the boundary layer problem \cite{prandtl1938zur} reduced to 1D\@: \begin{equation} \label{eq:bl} u u' = \nu u'' \quad \quad u(0) = 1 \quad u(1) = 0. \end{equation} The problem has the analytical solution \begin{equation} \label{eq:bl-solution} u(x) = \frac{2C}{1+\exp\left(\frac{x-1}{\nu}\cdot C\right)} - C\quad C > 1, \end{equation} with $C$ a constant determined by the algebraic equation \begin{equation*} 1-\frac{2}{1+C} = \exp\left(-\frac{C}{\nu}\right). \end{equation*} We have $C \approx 1.2$ when $\nu=0.5$, and $C$ tends to $1$ rapidly as $\nu$ decreases to $0$. The boundary term of the model is constructed according to the boundary conditions in \cref{eq:bl}, where the linear function $G(x)=1-x$ is sufficient to match the values at the two boundary points. The weight function is constructed as a polynomial such that $w(0)=0$ and $w(1)=0$. Finally, the trial function is defined as \begin{equation} \label{eq:bl-model} \hat{u}(x;\theta,\beta) = (1-x) + x(1-x)\cdot N(x;\theta, \beta), \end{equation} and the loss function is defined similarly to \cref{eq:1d-interal-loss}. The model is trained with $100$ data points uniformly distributed in $[0, 1]$.
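For this second-order problem the residual needs $u''$, which reverse-mode AD provides by differentiating twice. A hedged sketch with the same illustrative substitutions as before (PyTorch in place of TensorFlow; identifiers ours):
\begin{verbatim}
# Sketch: residual of the boundary-layer BVP (Eq. bl) with the trial
# function of Eq. (bl-model); second derivative via nested autograd.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(),
                          torch.nn.Linear(20, 20), torch.nn.Tanh(),
                          torch.nn.Linear(20, 1))
nu = 0.5

def u_hat(x):
    # u_hat(x) = (1 - x) + x(1 - x) N(x): matches u(0)=1, u(1)=0 exactly.
    return (1.0 - x) + x * (1.0 - x) * net(x)

def residual(x):
    x = x.clone().requires_grad_(True)
    u = u_hat(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u),
                             create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du),
                              create_graph=True)[0]
    return u * du - nu * d2u           # u u' - nu u'' should vanish

x = torch.linspace(0.0, 1.0, 100).reshape(-1, 1)  # 100 uniform points
loss = residual(x).pow(2).sum()        # minimized with L-BFGS as before
\end{verbatim}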
\begin{figure}[hpbt] \centering \subfloat[][Model $\hat{y}(x)$ \label{fig:bl-y-dev}]{% \includegraphics[width=0.45\textwidth]{1D-BL-05-poly-compare-y-all.pdf} } \subfloat[][Reduced solution $N(x)$ \label{fig:bl-N-dev}]{% \includegraphics[width=0.45\textwidth]{1D-BL-05-poly-compare-N-all.pdf} } \caption{Error distributions of the BVP with average error $< 10^{-5}$} \label{fig:bl-dev} \end{figure} \begin{figure}[hptb] \centering \subfloat[][1st order derivative\label{fig:bl-diff1}]{% \includegraphics[width=0.3\textwidth]{1D-BL-diff1.pdf} } \subfloat[][2nd order derivative]{% \includegraphics[width=0.3\textwidth]{1D-BL-diff2.pdf} \label{fig:bl-diff2} } \subfloat[][3rd order derivative]{% \includegraphics[width=0.3\textwidth]{1D-BL-diff3.pdf} \label{fig:bl-diff3} }\\ \subfloat[][4th order derivative]{% \includegraphics[width=0.3\textwidth]{1D-BL-diff4.pdf} \label{fig:bl-diff4} } \subfloat[][5th order derivative]{% \includegraphics[width=0.3\textwidth]{1D-BL-diff5.pdf} \label{fig:bl-diff5} } \subfloat[][6th order derivative]{% \includegraphics[width=0.3\textwidth]{1D-BL-diff6.pdf} \label{fig:bl-diff6} }\\ \subfloat[][7th order derivative]{% \includegraphics[width=0.3\textwidth]{1D-BL-diff7.pdf} \label{fig:bl-diff7} } \subfloat[][8th order derivative]{% \includegraphics[width=0.3\textwidth]{1D-BL-diff8.pdf} \label{fig:bl-diff8} } \caption{Derivatives of the learned solution of \cref{eq:bl}} \end{figure} \Cref{fig:bl-dev} shows that the BVP is solved with high numerical accuracy, and \crefrange{fig:bl-diff1}{fig:bl-diff8} study the differential properties of the learned solution. The CMFN solution to the problem is not only an accurate numerical approximation to the analytical solution, but its first- to eighth-order derivatives are also all accurate numerical approximations to their analytical counterparts. This is very hard to achieve with the commonly used numerical methods enumerated in Section~1. Another important observation is that once the weight is defined by polynomials, there is a single proper choice based on the initial/boundary conditions: for a 1D Dirichlet boundary condition at $x=a$, the weight function should contain a factor $(x-a)$. For example, the weight in \cref{eq:bl-model} should be defined as $w(x) = x(1-x)$, a Hermite interpolation based on the boundary conditions $w(0) = w(1) = 0$; if the weight is defined as $w^*(x)=x^2(1-x)$ instead, accuracy drops sharply, since the factor $x^2$ in $w^*(x)$ not only vanishes at $x=0$ but also has a zero first-order derivative there; as a result, the boundary term $G(x)$ unexpectedly dominates both the function value and the first-order derivative at $x=0$. A two-dimensional problem is tested with the heat conduction problem (Laplace equation): \begin{equation} \label{eq:laplace} \nabla^2 T(x, y) = 0 \quad\quad x\in[0, 1] \quad y\in[0, 1] \end{equation} The boundary conditions of the Dirichlet problem are: \begin{equation} \label{eq:dirichlet-bd} T(0, y) = T(1, y) = T(x, 0) = 0 \quad T(x, 1) = \sin \pi x. \end{equation} The problem has the analytical solution \begin{equation} \label{eq:heat-solution} T(x, y) = \frac{\sin\pi x\sinh\pi y}{\sinh\pi}, \end{equation} so the error of the numerical solution $\hat{T}(x, y)$ is evaluated as \begin{equation} \label{eq:heat-error} \mathrm{Error} = \sqrt{\frac{1}{S_\Omega}\int_\Omega |T(x,y) - \hat{T}(x, y)|^2 \dd{\Omega}}, \end{equation} with $S_\Omega$ being the area of the domain ($S_\Omega = 1$).
The model for the Dirichlet problem is \begin{equation} \label{eq:heat-dirichlet-model} \hat{T}(x, y;\theta, \beta) = y\sin\pi x + x(1-x)y(1-y)\cdot N(x, y;\theta, \beta). \end{equation} It is easily verified that the requirements for $G$ and $\tilde{N}$ are satisfied. The weight function is constructed by the principle discussed above: it consists of factors from all four boundary conditions. The loss function is constructed similarly to \cref{eq:1d-interal-loss}, and the simulation is done by an MFN with $2$ hidden layers and $40$ neurons in each hidden layer. The training set consists of $900$ points, the vertices of a $30\times30$ uniform mesh on the two-dimensional unit cube. The results of the simulation are demonstrated in \cref{fig:heat}. \Cref{fig:heat-accurate} shows the contour of the analytical solution \cref{eq:heat-solution}, and the simulated solution in \cref{fig:heat-dirichlet} matches the analytical solution closely. \Cref{fig:heat-error} illustrates the pointwise deviation of the bulk term from the analytical solution. The average error in \cref{fig:heat-dirichlet} is $2.8\times10^{-6}$. The similar case calculated by the penalty method reported in \cite{raissi2017physics,liu2019neural} achieves only an error of order $10^{-3}$. \begin{figure}[hptb] \centering \subfloat[][Analytical solution \label{fig:heat-accurate} ]{% \includegraphics[width=0.45\textwidth]{heat-accurate.pdf}} \subfloat[][Numerical Solution \label{fig:heat-dirichlet}]{% \includegraphics[width=0.45\textwidth]{heat-Dirichlet-simulated.pdf}}\\ \subfloat[][Deviation of bulk term \label{fig:heat-error}]{% \includegraphics[width=0.45\textwidth]{heat-coreMFN-dev.pdf}} \caption{The results of simulating the Laplace equation} \label{fig:heat} \end{figure} The two properties that distinguish CMFN from other methods are its generality and accuracy. On the generality side, the framework is not sensitive to the type of differential equation; it works similarly on elliptic, hyperbolic, and parabolic problems. One interesting verification is to turn the original problem into a convection-diffusion problem: \begin{equation} \label{eq:c-d} u\frac{\partial T}{\partial x}+v\frac{\partial T}{\partial y}=\nabla^2 T+f, \end{equation} where the boundary conditions are set the same as in \cref{eq:dirichlet-bd}, as a Dirichlet problem, and all other configurations, such as network topology and training set, are the same as before. The convection velocities $(u, v)$ and source term $f$ are assigned artificially as: \begin{gather*} u(x, y) = y^2 \cos x, \\ v(x, y) = \frac13 y^3 \sin x, \\ f(x,y) = y^2\cos x \frac{\pi \cos \pi x \sinh \pi y}{\sinh \pi } + \frac13 y^3\sin x \frac{\pi \sin \pi x \cosh \pi y}{\sinh \pi } + \\ \alpha\left(2\pi^2\sin2\pi x\cos2\pi y-4\pi^2\sin2\pi x\sin^2\pi y\right) -\\ \alpha \left(2\pi y^2\cos x \cos2\pi x \sin^2\pi y + \frac\pi3 y^3 \sin x \sin2\pi x \sin2\pi y \right), \\ \alpha = 0.1. \end{gather*} This setup ensures that the analytical solution is \begin{equation} \label{eq:cd-solution} T(x,y) =\frac{\sin\pi x\sinh\pi y}{\sinh\pi} - \alpha \sin 2\pi x \sin^2 \pi y. \end{equation} The simulated solution is shown in \cref{fig:heat-cd}; its average error is $8.57\times10^{-4}$, while the elliptic counterpart of the problem has an average error of $2.6\times10^{-5}$.
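Under the same illustrative assumptions as the 1D sketches (PyTorch in place of TensorFlow; identifiers ours), the two-dimensional trial function \cref{eq:heat-dirichlet-model} and its Laplacian residual can be sketched as follows; the convection-diffusion variant would only add the advection terms $u\,\partial_x T + v\,\partial_y T$ and the source $f$ to the residual.
\begin{verbatim}
# Sketch: trial function and Laplacian residual for the Dirichlet
# problem (Eq. heat-dirichlet-model); illustrative only.
import math
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 40), torch.nn.Tanh(),
                          torch.nn.Linear(40, 40), torch.nn.Tanh(),
                          torch.nn.Linear(40, 1))

def T_hat(xy):
    x, y = xy[:, :1], xy[:, 1:]
    # G(x,y) = y sin(pi x) matches all four Dirichlet boundaries;
    # w(x,y) = x(1-x) y(1-y) vanishes on the whole boundary.
    return y * torch.sin(math.pi * x) + x * (1 - x) * y * (1 - y) * net(xy)

def laplacian_residual(xy):
    xy = xy.clone().requires_grad_(True)
    T = T_hat(xy)
    g = torch.autograd.grad(T, xy, torch.ones_like(T),
                            create_graph=True)[0]           # (T_x, T_y)
    gxx = torch.autograd.grad(g[:, 0].sum(), xy, create_graph=True)[0][:, 0]
    gyy = torch.autograd.grad(g[:, 1].sum(), xy, create_graph=True)[0][:, 1]
    return gxx + gyy          # should vanish for Laplace's equation

# 30 x 30 uniform mesh on the unit square (900 collocation points)
s = torch.linspace(0.0, 1.0, 30)
gx, gy = torch.meshgrid(s, s, indexing="ij")
pts = torch.stack([gx.reshape(-1), gy.reshape(-1)], dim=1)
loss = laplacian_residual(pts).pow(2).sum()   # minimized with L-BFGS
\end{verbatim}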
\begin{figure} \centering \subfloat[][Numerical solution \label{fig:heat-cd-num}]{% \includegraphics[width=0.45\textwidth]{heat-CD-numerical.pdf}} \subfloat[][Analytical solution \label{fig:heat-cd-exact}]{% \includegraphics[width=0.45\textwidth]{heat-CD.pdf}} \caption{Convection-diffusion problem with average error of $8.57\times10^{-4}$} \label{fig:heat-cd} \end{figure} Another important property of CMFN is its accuracy. While solving the Laplace equation, a penalty method such as PINN \cite{liu2019neural} is only able to achieve an average error of $1.6\times10^{-3}$, and it is easily observed that the boundary conditions are not accurately satisfied (especially in the two lower corners of \cref{fig:heat-pinn}), whereas CMFN keeps the boundary conditions accurately satisfied in an intrinsic way. \begin{figure} \centering \subfloat[][Numerical solution \label{fig:heat-pinn-solution}]{% \includegraphics[width=0.45\textwidth]{heat-PINN.pdf}} \subfloat[][Error distribution \label{fig:heat-pinn-deviation}]{% \includegraphics[width=0.45\textwidth]{heat-PINN-dev.pdf}} \caption{Numerical solution by the PINN method with average error of $1.6\times10^{-3}$} \label{fig:heat-pinn} \end{figure} \section{Conclusion and Future Work} In this paper, we present a novel framework for constructing ODE/PDE solvers based on the CMFN method. The numerical method and its application are discussed with regard to ODEs and PDEs with Dirichlet boundary conditions. The CMFN method stands out for its generality and accuracy. Traditional neural network methods based on RBF \cite{mai2001numerical} or penalty methods \cite{raissi2017physics} have very limited accuracy. By constructing the trial function with a weighted reduced solution as the bulk term and a pre-defined boundary term, the model satisfies the boundary conditions automatically, and as a result, training based on the residuals of the differential equations can be more effective. Moreover, the CMFN method trains the neural network with all input data simultaneously, so it is intrinsically able to retain accuracy when numerically solving the differential equation on a large domain and over a large time span. Iterative methods such as FVM and FDM accumulate truncation error in each step, so the scheme has to be designed carefully to be applied to larger domains and larger time spans, while neural-network-based methods do not suffer from this issue. The generality of the CMFN framework manifests in several aspects. Iterative methods such as FDM, FVM, and FEM are sensitive to the type of the PDE, since the growth of numerical error differs in hyperbolic, parabolic, and elliptic problems. CMFN instead provides a unified method. Compared with traditional neural network methods based on RBF~\cite{mai2001numerical}, CMFN can be applied similarly to both linear and nonlinear problems, while the latter usually only works on linear problems. In this work, a very simple network topology (a four-layer feedforward network with twenty neurons per hidden layer) and very small data sets (fewer than $10^3$ points) are used, yet the heat transfer equation and the convection-diffusion equation with Dirichlet boundary conditions on the unit cube are solved successfully with the same model. Another property of the CMFN method worth mentioning is the indeterminacy of the numerical result. In the training stage of our new framework, there is an initial guess on the MFN\@.
Since the network has a large number of parameters, it has to be randomly initialized; after training, the loss function is reduced to a small value, but usually not zero, so the exact optimum is usually not obtained. These two factors lead to the result that each specific parameter of the MFN behaves rather randomly. However, the overall behavior of the computational machine is controllable, because as long as the objective function has enough continuity, the error decreases along with the reduction of the loss function. This work is a new starting point for the authors in the field of constructing PDE solvers. Several directions could be considered in the future: \begin{enumerate} \item finding a general method to construct a proper form of the bulk term for Neumann boundary conditions; \item finding a systematic method of constructing the weight and boundary terms, especially for complex geometries; \item building larger and deeper networks for more complex problems such as the Navier-Stokes equations; and \item giving a more mathematically rigorous proof of the existence and uniqueness of the reduced solution. \end{enumerate}
{ "timestamp": "2019-04-16T02:14:04", "yymm": "1904", "arxiv_id": "1904.06619", "language": "en", "url": "https://arxiv.org/abs/1904.06619" }
\section{Introduction} \label{sec:intro} \IEEEPARstart{D}{ynamic} networks are seemingly ubiquitous in the real world. Such networks evolve over time with the addition, deletion, and changing of nodes and links. The temporal information in these networks is known to be important to accurately model, predict, and understand network data~\cite{watts1998collective,newman2001structure}. Despite the importance of these dynamics, most previous work on embedding methods has ignored the temporal information in network data~\cite{deepwalk,node2vec,line,grarep,deepGL,struc2vec,ASNE,ahmed17learning-attr-graphs,ComE,lee17-Deep-Graph-Attention}. \makeatletter \global\let\tikz@ensure@dollar@catcode=\relax \makeatother \tikzstyle{every node}=[font=\large,line width=1.5pt] \begin{figure}[t!] \centering \begin{center} \subfigure[Graph (edge) stream]{ \scalebox{0.45}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick, main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\bfseries}, invis node/.style={circle,draw=white,fill=white,draw,font=\sffamily\Large\bfseries,text=black}] \node[main node] (2) at (0,0) {$\mathbf{v_2}$}; \node[main node] (1) [below of=2] {$\mathbf{v_1}$}; \node[main node] (3) [right of=2] at (-1,0) {$\mathbf{v_3}$}; \node[main node] (22) [below of=3]{$\mathbf{v_2}$}; \node[main node] (4) [right of=3] at (0.5,0) {$\mathbf{v_4}$}; \node[main node] (33) [below of=4]{$\mathbf{v_3}$}; \node[main node] (11) [right of=4] at (2,0) {$\mathbf{v_1}$}; \node[main node] (44) [below of=11]{$\mathbf{v_4}$}; \node[main node] (444) [right of=11] at (3.5,0) {$\mathbf{v_4}$}; \node[main node] (333) [below of=444]{$\mathbf{v_3}$}; \node[main node] (3333) [right of=444] at (5,0) {$\mathbf{v_3}$}; \node[main node] (5) [below of=3333]{$\mathbf{v_5}$}; \node[main node] (55) [right of=3333] at (6.5,0) {$\mathbf{v_5}$}; \node[main node] (222) [below of=55]{$\mathbf{v_2}$}; \node[main node] (33333) [right of=55] at (8,0) {$\mathbf{v_3}$}; \node[main node] (6) [below of=33333]{$\mathbf{v_6}$}; \node[invis node] (0) [right of=33333] at (9.3,0) {$\mathbf{}$}; \node[invis node] (00) [below of=0]{$\mathbf{}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (1) edge [left] node [anchor=center, left] {1} (2) (22) edge [left] node [anchor=center, left] {2} (3) (33) edge [left] node [anchor=center, left] {3} (4) (44) edge [left] node [anchor=center, left] {4} (11) (333) edge [left] node [anchor=center, left] {5} (444) (5) edge [left] node [anchor=center, left] {7} (3333) (222) edge [left] node [anchor=center, left] {8} (55) (6) edge [left] node [anchor=center, left] {10} (33333) (00) edge [thick,line width=0mm,draw=white,left] node [anchor=center, left] {\Large \bf $\cdots$} (0); \end{tikzpicture} } } \subfigure[Continuous-Time Dynamic Network (CTDN)]{ \scalebox{0.5}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick, main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\bfseries}, white node/.style={circle,draw=white,fill=white,text=white,draw,font=\sffamily\Large\bfseries}] \node[main node] (3) {$\mathbf{v_2}$}; \node[main node] (1) [below left of=3] {$\mathbf{v_1}$}; \node[main node] (4) [below right of=1] {$\mathbf{v_4}$}; \node[main node] (2) [below right of=3] {$\mathbf{v_3}$}; \node[main node] (5) [right of=2] {$\mathbf{v_5}$}; \node[main node] (6) [below right of=2] {$\mathbf{v_6}$}; \node[white node] (7) [left of=1] {$\mathbf{---}$}; \node[white node] (8)
[right of=5] {$\mathbf{---}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (1) edge [left] node [above left] {1} (3) (2) edge [right] node[below right] {3,5} (4) (4) edge [left] node[below left] {4} (1) (3) edge[bend left] node[sloped,anchor=center,above] {8} (5) (5) edge node[anchor=center,above] {7} (2) (6) edge node[sloped,anchor=center,below] {10} (2) (3) edge [right] node[above right] {2} (2); \end{tikzpicture} } } \end{center} \caption{ Dynamic network. Edges are labeled by time. Observe that existing methods that ignore time would consider $v_4 \!\! \longrightarrow \! v_1 \!\! \longrightarrow \! v_2$ a \emph{valid} walk; however, $v_4 \!\! \longrightarrow \! v_1 \!\! \longrightarrow \! v_2$ is clearly \emph{invalid with respect to time} since $v_1 \!\! \longrightarrow \! v_2$ exists in the past with respect to $v_4 \!\! \longrightarrow \! v_1$. In this work, we propose the notion of \emph{temporal random walks} for embeddings that capture the \emph{true temporally valid} behavior in networks. In addition, our approach naturally supports learning in \emph{graph streams} where edges arrive continuously over time (\emph{e.g.}, every second/millisecond). } \label{fig:info-loss-example} \end{figure} In this work, we address the problem of learning dynamic node embeddings directly from edge streams (\emph{i.e.}, \emph{continuous-time dynamic networks}) consisting of a sequence of timestamped edges at the finest temporal granularity for improving the accuracy of predictive models. We propose \emph{continuous-time dynamic network embeddings} (CTDNE) and describe a general framework for learning such embeddings based on the notion of \emph{temporal random walks} (walks that respect time). The framework is general with many interchangeable components and can be used in a straightforward fashion for incorporating temporal dependencies into existing node embedding and deep graph models that use random walks. Most importantly, the CTDNEs are learned from temporal random walks that represent actual \emph{temporally valid sequences} of node interactions and thus avoid the issues and information loss that arise when time is ignored~\cite{deepwalk,node2vec,line,grarep,deepGL,struc2vec,ASNE,ahmed17learning-attr-graphs,ComE,lee17-Deep-Graph-Attention} or approximated as a sequence of discrete static snapshot graphs~\cite{rossi2013dbmm-wsdm,hisano2016semi,kamra2017dgdmn,saha2018models,rahman2018dylink2vec} (Figure~\ref{fig:info-discrete-time-model-loss-example}) as done in previous work. The proposed approach (1) obeys the direction of time and (2) biases the random walks towards edges (and nodes) that are more recent and more frequent. The result is a more appropriate time-dependent network representation that captures the important temporal properties of the continuous-time dynamic network at the finest, most natural temporal granularity without loss of information while using walks that are temporally valid (as opposed to walks that do not obey time and thus are invalid and noisy as they represent sequences that are impossible with respect to time). Hence, the framework allows existing embedding methods to be easily adapted for learning more appropriate network representations from continuous-time dynamic networks by ensuring time is respected and avoiding impossible sequences of events. The proposed framework learns more appropriate dynamic node embeddings directly from a stream of timestamped edges at the finest temporal granularity.
In particular, this work proposes the use of temporal walks as a basis to learn temporally valid node embeddings that capture the important temporal dependencies of the network at the finest, most natural granularity (\emph{e.g.}, at a time scale of seconds or milliseconds). This is in contrast to approximating the dynamic network as a sequence of static snapshot graphs $G_1,\ldots,G_t$ where each static snapshot graph represents all edges that occur within a user-specified discrete-time interval (\emph{e.g.}, day or week)~\cite{rossi2012dynamic-srl,soundarajan2016generating,sun2007graphscope}. Besides the obvious loss of information, there are many other issues, such as selecting an appropriate aggregation granularity, which is known to be an important and challenging problem in itself and can lead to poor predictive performance or misleading results. In addition, our approach naturally supports learning in \emph{graph streams} where edges arrive continuously over time (\emph{e.g.}, every second/millisecond)~\cite{aggarwal2011outlier,ahmed17streams,aggarwal2010dense,guha2012graph} and therefore can be used for a variety of applications requiring real-time performance~\cite{pienta2015scalable,cai2012facilitating,ahmed2015interactive}. We demonstrate the effectiveness of the proposed framework and generalized dynamic network embedding method for temporal link prediction in several real-world networks from a variety of application domains. Overall, the proposed method achieves an average gain of $11.9\%$ across all methods and graphs. The results indicate that modeling temporal dependencies in graphs is important for learning appropriate and meaningful network representations. In addition, any existing embedding method or deep graph model that uses random walks can benefit from the proposed framework (\emph{e.g.},~\cite{deepwalk,node2vec,struc2vec,ComE,ASNE,dong2017metapath2vec,ahmed17learning-attr-graphs,lee17-Deep-Graph-Attention}) as it serves as a basis for incorporating important temporal dependencies into existing methods. Methods generalized by the framework are able to learn more meaningful and accurate time-dependent network embeddings that capture important properties from continuous-time dynamic networks. Previous embedding methods and deep graph models that use random walks search over the space of random walks $\mathbb{S}$ on $G$, whereas the class of models (continuous-time dynamic network embeddings) proposed in this work learns temporal embeddings by searching over the space $\mathbb{S}_{T}$ of temporal random walks that obey time, and thus $\mathbb{S}_{T}$ includes only \emph{temporally valid walks}. See Figure~\ref{fig:space-of-random-walks} for intuition. Informally, a \emph{temporal walk} $S_t$ from node $v_{i_{1}}$ to node $v_{i_{L+1}}$ is defined as a sequence of edges $\lbrace(v_{i_{1}}, v_{i_{2}}, t_{i_{1}})$, $(v_{i_{2}},v_{i_{3}}, t_{i_{2}}), \ldots, (v_{i_{L}},$ $v_{i_{L+1}}, t_{i_{L}})\rbrace$ such that $t_{i_{1}} \leq t_{i_{2}} \leq \ldots \leq t_{i_{L}}$. A temporal walk represents a \emph{temporally valid} sequence of edges traversed in non-decreasing order of edge times and therefore respects time. For instance, suppose each edge represents a contact (\emph{e.g.}, email, phone call, proximity) between two entities, then a temporal random walk represents a feasible route for a piece of information through the dynamic network.
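As a small illustration (ours, not from the paper), checking whether a sequence of timestamped edges forms a temporal walk amounts to verifying that consecutive edges share endpoints and that timestamps are non-decreasing:
\begin{verbatim}
# Illustrative helper (not from the paper): validity of a temporal walk
# given as a sequence of timestamped edges (u, v, t).
def is_temporal_walk(edges):
    for (u1, v1, t1), (u2, v2, t2) in zip(edges, edges[1:]):
        if v1 != u2 or t2 < t1:     # must chain and respect time
            return False
    return True

# Figure 1: v4 -> v1 -> v2 is invalid since the edge (v1, v2) occurs
# at time 1, before (v4, v1) at time 4.
print(is_temporal_walk([("v4", "v1", 4), ("v1", "v2", 1)]))  # False
print(is_temporal_walk([("v1", "v2", 1), ("v2", "v3", 2)]))  # True
\end{verbatim}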
It is straightforward to see that existing methods that ignore time learn embeddings from a set of random walks that are not actually possible when time is respected and thus represent invalid sequences of events. There is only a small overlap between $\mathbb{S}_T$ and $\mathbb{S}_D$ as shown in Figure~\ref{fig:space-of-random-walks} since only a small fraction of the space of walks in $\mathbb{S}_D$ are actually time-respecting (valid temporal walks). \makeatletter \global\let\tikz@ensure@dollar@catcode=\relax \makeatother \tikzstyle{every node}=[font=\large,line width=1.5pt] \begin{figure}[b!] \vspace{-5mm} \centering \begin{center} \subfigure[Static graph ignoring time]{ \label{fig:static-graph-example} \scalebox{0.46}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick,main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\bfseries}] \node[main node] (3) {$\mathbf{v_2}$}; \node[main node] (1) [below left of=3] {$\mathbf{v_1}$}; \node[main node] (4) [below right of=1] {$\mathbf{v_4}$}; \node[main node] (2) [below right of=3] {$\mathbf{v_3}$}; \node[main node] (5) [right of=2] {$\mathbf{v_5}$}; \node[main node] (6) [below right of=2] {$\mathbf{v_6}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (1) edge [left] node [above left] {} (3) (2) edge [left] node[below right] {} (4) (4) edge [left] node[below left] {} (1) (3) edge[bend left] node[sloped,anchor=center,above] {} (5) (5) edge node[anchor=center,above] {} (2) (6) edge node[sloped,anchor=center,below] {} (2) (3) edge [right] node[above right] {} (2); \end{tikzpicture} } } \tikzstyle{background-page}=[rectangle, fill=gray!25, inner sep=0.5cm, rounded corners=5mm] \subfigure[Discrete-Time Dynamic Network (DTDN)] {\label{fig:DTND-example} \begin{minipage}[t]{0.43\linewidth} \scalebox{0.42}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick,main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\LARGE\bfseries}] \node[main node] (3) {$\mathbf{v_2}$}; \node[main node] (1) [below left of=3] {$\mathbf{v_1}$}; \node[main node] (4) [below right of=1] {$\mathbf{v_4}$}; \node[main node] (2) [below right of=3] {$\mathbf{v_3}$}; \node[main node] (5) [right of=2] {$\mathbf{v_5}$}; \node[main node] (6) [below right of=2] {$\mathbf{v_6}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (1) edge [left] node [above left] {} (3) (2) edge [right] node[below right] {} (4) (4) edge [left] node[below left] {} (1) (3) edge [right] node[above right] {} (2); \begin{pgfonlayer}{background} \node [background-page, fit=(3) (1) (4) (2) (5) (6), label=below:\fontsize{18}{20}\selectfont $G_1$ ] {}; \end{pgfonlayer} \end{tikzpicture} } \end{minipage} \hspace{2mm} \begin{minipage}[t]{0.43\linewidth} \scalebox{0.42}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick,main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\LARGE\bfseries}] \node[main node] (3) {$\mathbf{v_2}$}; \node[main node] (1) [below left of=3] {$\mathbf{v_1}$}; \node[main node] (4) [below right of=1] {$\mathbf{v_4}$}; \node[main node] (2) [below right of=3] {$\mathbf{v_3}$}; \node[main node] (5) [right of=2] {$\mathbf{v_5}$}; \node[main node] (6) [below right of=2] {$\mathbf{v_6}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (3) edge[bend left] node[sloped,anchor=center,above] {} (5) (5) edge 
node[anchor=center,above] {} (2) (6) edge node[sloped,anchor=center,below] {} (2); \begin{pgfonlayer}{background} \node [background-page, fit=(3) (1) (4) (2) (5) (6), label=below:\fontsize{18}{20}\selectfont $G_2$] {}; \end{pgfonlayer} \end{tikzpicture} } \end{minipage} } \end{center} \vspace{-4mm} \caption{Representing the continuous-time dynamic network as a static graph or discrete-time dynamic network (DTDN). Noise and information loss occur when the true dynamic network (Figure~\ref{fig:info-loss-example}) is approximated as a sequence of discrete static snapshot graphs $G_1,\ldots,G_t$ using a user-defined aggregation time-scale $s$ (temporal granularity). Suppose the dynamic network in Figure~\ref{fig:info-loss-example} is used and $s=5$, then $G_1$ includes all edges in the time-interval $[1,5]$ whereas $G_2$ includes all edges in $[6,10]$ and so on. Notice that in the static snapshot graph $G_1$ the walk $v_4 \!\! \longrightarrow \! v_1 \!\! \longrightarrow \! v_2$ is still possible despite being \emph{invalid}, while the perfectly valid temporal walk $v_1 \!\! \longrightarrow \! v_2 \!\! \longrightarrow \! v_5$ is impossible. Both cases are captured correctly without any loss using the notion of \emph{temporal walk} on the actual dynamic network. } \label{fig:info-discrete-time-model-loss-example} \end{figure} The order in which links (events) occur in a network carries important information; \emph{e.g.}, if the event (link) represents an email communication sent from one user to another, then the state of the user who receives the email message changes in response to the email communication. For instance, suppose we have two emails $e_i = (v_1,v_2)$ from $v_1$ to $v_2$ and $e_j=(v_2,v_3)$ from $v_2$ to $v_3$, and let $\mathcal{T}(v_1,v_2)$ denote the time of an email $e_i = (v_1,v_2)$. If $\mathcal{T}(v_1,v_2) < \mathcal{T}(v_2,v_3)$ then the message $e_j = (v_2,v_3)$ may reflect the information received from the email communication $e_i=(v_1,v_2)$. However, if $\mathcal{T}(v_1,v_2) > \mathcal{T}(v_2,v_3)$ then the message $e_j = (v_2,v_3)$ cannot contain any information communicated in the email $e_i=(v_1,v_2)$. This is just one simple example illustrating the importance of modeling the actual sequence of events (email communications). Embedding methods that ignore time are prone to many issues, such as learning inappropriate node embeddings that do not accurately capture the dynamics in the network, \emph{e.g.}, the real-world interactions or the flow of information among nodes. Examples of the information loss that occurs when time is ignored or the actual dynamic network is approximated using a sequence of discrete static snapshot graphs are shown in Figures~\ref{fig:info-loss-example} and~\ref{fig:info-discrete-time-model-loss-example}, respectively. This is true for networks that involve the flow or diffusion of information through a network~\cite{lerman2010information,acemoglu2010spread,rossi2012dpr-dynamical}, networks modeling the spread of disease/infection~\cite{infect}, spread of influence in social networks (with applications to product adoption, viral marketing)~\cite{java2006modeling,domingos2005mining}, or more generally any type of dynamical system or diffusion process over a network~\cite{lerman2010information,acemoglu2010spread,rossi2012dpr-dynamical}. The proposed approach naturally supports generating dynamic node embeddings for any pair of nodes at a specific time $t$.
More specifically, given a newly arrived edge between nodes $i$ and $j$ at time $t$, we simply add the edge to the graph, perform a number of temporal random walks that contain those nodes, and then update the embedding vectors for those nodes (via a fast partial update) using only those walks. In this case, there is obviously no need to recompute the embedding vectors for all nodes in the graph, as the update is very minor and an online partial update can be performed quickly. This includes the case where either node in the new edge has never been seen previously. The above is a special case of our framework and is a trivial modification. Notice that we can also drop past edges as they become stale. \medskip\noindent\textbf{Summary of Main Contributions:} This work makes three main contributions. First, we describe a new class of embeddings based on the notion of \emph{temporal walks}. This notion can be used in a straightforward fashion to adapt other existing and/or future state-of-the-art methods for learning embeddings from temporal networks (graph streams). Second, unlike previous work that learns embeddings using an approximation of the actual dynamic network (\emph{i.e.}, a sequence of static graphs), we describe a new class of embeddings called \emph{continuous-time dynamic network embeddings} (CTDNE) that are learned directly from the graph stream. CTDNEs avoid the issues and information loss that arise when time is ignored or the dynamic network (graph stream) is approximated as a sequence of discrete static snapshot graphs. This new class of embeddings leverages the notion of \emph{temporal walks}, which captures the \emph{temporally valid interactions} (\emph{e.g.}, flow of information, spread of diseases) in the dynamic network (graph stream) in a lossless fashion. As an aside, since these embeddings are learned directly from the graph stream at the finest granularity, they can also be learned in an online fashion, \emph{i.e.}, node embeddings are updated after every new edge (or batch of edges). Finally, we describe a framework for learning them based on the notion of \emph{temporal walks}. The proposed framework provides a basis for generalizing existing (or future state-of-the-art) embedding methods that use the traditional notion of random walks over static or discrete approximations of the actual dynamic network. \newcommand{\subsec}[1]{\medskip\noindent\textbf{#1:}\;} \section{Related work} \label{sec:related-work} \noindent \subsec{Representation Learning in Static Networks} The node embedding problem has received considerable attention from the research community in recent years.\footnote{In the time between our shorter CTDNE paper from early 2018~\cite{CTDNE} and this paper's original submission, there have been a number of closely related follow-up works. For temporal clarity, these works are not reviewed or compared against in detail.} See~\cite{rossi12jair} for an early survey on representation learning in relational/graph data. The goal is to learn encodings (embeddings, representations, features) that capture key properties about each node such as their role in the graph based on their structural characteristics (\emph{i.e.}, roles capture distinct structural properties, \emph{e.g.}, hub nodes, bridge nodes, near-cliques)~\cite{rossi2014roles} or community (\emph{i.e.}, communities represent groups of nodes that are close together in the graph based on proximity, cohesive/tightly connected nodes)~\cite{ng2002spectral,pons2006computing}.
Since nodes that share similar roles (based on structural properties) or communities (based on proximity, cohesiveness) are grouped close to each other in the embedding space, one can easily use the learned embeddings for tasks such as ranking~\cite{page1998pagerank}, community detection~\cite{ng2002spectral,pons2006computing}, role embeddings~\cite{rossi2014roles,ahmed2017edgeroles}, link prediction~\cite{liu2010link}, and node classification~\cite{rossi2012dynamic-srl}. Many of the techniques that were initially proposed for solving the node embedding problem were based on graph factorization~\cite{ahmedWWW13,Belkin02laplacianeigenmaps,grarep}. More recently, the skip-gram model~\cite{skipgram-old} was introduced in the natural language processing domain to learn vector representations for words. Inspired by skip-gram's success in language modeling, various methods~\cite{deepwalk,node2vec,line} have been proposed to learn node embeddings using skip-gram by treating a graph as a ``document.'' Two of the more notable methods, DeepWalk~\cite{deepwalk} and node2vec~\cite{node2vec}, use random walks to sample an ordered sequence of nodes from a graph. The skip-gram model can then be applied to these sequences to learn node embeddings. \subsec{Representation Learning in Dynamic Networks} Researchers have also tackled the problem of node embedding in more complex graphs, including attributed networks~\cite{ASNE}, heterogeneous networks~\cite{dong2017metapath2vec} and dynamic networks~\cite{rossi2013dbmm-wsdm,zhou2018dynamic,li2017attributed}. However, the majority of the work in the area still fails to consider graphs that evolve over time (\emph{i.e.}, temporal graphs). A few works have begun to explore the problem of learning node embeddings from temporal networks~\cite{rossi2013dbmm-wsdm,hisano2016semi,kamra2017dgdmn, zhu2016scalable,saha2018models,rahman2018dylink2vec}. All of these approaches \emph{approximate} the dynamic network as a sequence of discrete static snapshot graphs, which is fundamentally different from the class of continuous-time dynamic network embedding methods introduced in this work. Notably, this work is the first to propose \emph{temporal random walks} for embeddings as well as \emph{CTDN embeddings} that use temporal walks to capture the actual temporally valid sequences observed in the CTDN, thus avoiding the issues and information loss that arise when embedding methods simply ignore time or use discrete static snapshot graphs (see Figure~\ref{fig:info-discrete-time-model-loss-example} for one example). Furthermore, we introduce a unifying framework that can serve as a basis for generalizing other random walk based deep learning (\emph{e.g.},~\cite{lee17-Deep-Graph-Attention}) and embedding methods (\emph{e.g.},~\cite{struc2vec,node2vec,ComE,ASNE,dong2017metapath2vec,hamilton2017inductive}) for learning more appropriate time-dependent embeddings from temporal networks. In contrast, previous work has simply introduced new approaches for temporal networks~\cite{hisano2016semi} and therefore focuses on an entirely different problem than the one in this work, which is a general framework that can be leveraged by other non-temporal approaches. Temporal graph smoothing of a sequence of discrete static snapshot graphs was proposed for classification in dynamic networks~\cite{rossi2012dynamic-srl}. The same approach has also been used for deriving role-based embeddings from dynamic networks~\cite{rossi2012role-www,rossi2013dbmm-wsdm}.
More recently, these techniques have been used to derive more meaningful embeddings from a sequence of discrete static snapshot graphs~\cite{bonner2018temporal,singer2019node,saha2018models,rahman2018dylink2vec}. All of these approaches model the dynamic network as a sequence of discrete static snapshot graphs, which is fundamentally different from the class of continuous-time dynamic network embedding methods introduced in this work. Table~\ref{table:qual-comp} provides a qualitative comparison of CTDNE methods to existing static methods or DTDNE methods that approximate the dynamic network as a discrete sequence of static snapshot graphs. \begin{table}[t!] \centering \renewcommand{\arraystretch}{1.10} \caption{Comparison of Different Classes of Embedding Methods} \label{table:qual-comp} \vspace{-2.5mm} \footnotesize \setlength{\tabcolsep}{2.9pt} \begin{tabularx}{1.0\linewidth}{l@{} cc ccc cHH @{}} \multicolumn{8}{@{}p{1.0\linewidth}}{ \scriptsize Comparison of CTDNE methods to existing methods categorized as either static methods (that ignore time) or DTDNE methods that approximate the actual dynamic network using a sequence of discrete static snapshot graphs. Does the method use the actual dynamic network at the finest temporal granularity, \emph{e.g.}, seconds or ms (or does it use discrete static approximations of the dynamic network); is it temporally valid; does it use temporal bias/smoothing functions to give more importance to recent or temporally recurring information; and does it naturally support graph streams and the streaming/online setting in general, where data is continuously arriving over time and embeddings can be incrementally updated in an online fashion. }\\ \toprule & {\footnotesize \bf Temporally } & & {\footnotesize \bf Finest} & {\footnotesize \bf Temporal} &&& \\ & {\footnotesize \bf valid} & & {\footnotesize \bf granularity} & {\footnotesize \bf bias/smoothing} && {\footnotesize \bf Streaming} && \\ \midrule \textsf{Static} & \ding{55} & & \ding{55} & \ding{55} && \ding{55} & \\ \textsf{DTDNE} & \ding{55} & & \ding{55} & \ding{51} && \ding{55} & \\ \textsf{CTDNE} & \ding{51} & & \ding{51} & \ding{51} && \ding{51} & \\ \bottomrule \end{tabularx} \vspace{-2mm} \end{table} \subsec{Temporal Networks} More recently, there has been significant research in developing network analysis and machine learning methods for modeling temporal networks. Temporal networks have been the focus of recent research including node classification in temporal networks~\cite{rossi2012dynamic-srl}, temporal link prediction~\cite{dunlavy2011temporal}, dynamic community detection~\cite{cazabet2014dynamic}, dynamic mixed-membership role models~\cite{fu2009dynamic,rossi2012role-www,rossi2013dbmm-wsdm}, anomaly detection in dynamic networks~\cite{ranshous2015dynamic-net-anomaly-survey}, influence modeling and online advertisement~\cite{goyal2010learning}, finding important entities in dynamic networks~\cite{rossi2012dpr-dynamical,OMadadhain2005}, and temporal network centrality and measures~\cite{holme2012temporal,beres2018temporal}. \subsec{Random Walks} Random walks on graphs have been studied for decades~\cite{lovasz1993random}. The theory underlying random walks and their connection to eigenvalues and other fundamental properties of graphs is well understood~\cite{chung2007random}. Our work is also related to uniform and non-uniform random walks on graphs~\cite{lovasz1993random,chung2007random}.
Random walks are at the heart of many important applications such as ranking~\cite{page1998pagerank}, community detection~\cite{ng2002spectral,pons2006computing}, recommendation~\cite{bogers2010movie}, link prediction~\cite{liu2010link}, influence modeling~\cite{java2006modeling}, search engines~\cite{lassez:latentlinks}, image segmentation~\cite{grady2006random}, routing in wireless sensor networks~\cite{servetto2002constrained}, and time-series forecasting~\cite{rossi2012dpr-dynamical}. These applications and techniques may also benefit from the proposed class of embeddings that are based on the notion of \emph{temporal random walks}. Recently, Ahmed~\emph{et al.}\xspace~\cite{ahmed17attrRandomWalks} proposed the notion of \emph{attributed random walks} that can be used to generalize existing methods for inductive learning and/or graph-based transfer learning tasks. In future work, we will investigate combining both attributed random walks and temporal random walks~\cite{tremblay2001temporal} to derive even more powerful embeddings. \section{Continuous-Time Dynamic Embeddings} \label{sec:streaming-network-embeddings} \noindent While previous work uses discrete approximations of the dynamic network (\emph{i.e.}, a sequence of discrete static snapshot graphs), this paper proposes an entirely new direction that instead focuses on learning embeddings directly from the graph stream using only temporally valid information. In this work, instead of approximating the dynamic network as a sequence of discrete static snapshot graphs defined as $G_1, \ldots, G_T$ where $G_i=(V, E_t)$ and $E_t$ are the edges active between the timespan $[t_{i-1},t_i]$, we model the \emph{temporal interactions} in a lossless fashion as a \emph{continuous-time dynamic network} (CTDN) defined formally as: \begin{Definition}[\sc Continuous-Time Dynamic Network] \label{eq:cont-time-dynamic-network} Given a graph $G=(V,E_T,\mathcal{T})$, let $V$ be a set of vertices, let $E_T \subseteq V \times V \times \RR^{+}$ be a set of temporal edges between vertices in $V$, and let $\mathcal{T} : E_T \rightarrow \RR^{+}$ be a function that maps each edge to a corresponding timestamp. At the finest granularity, each edge $e_i = (u,v,t) \in E_T$ may be assigned a unique time $t \in \RR^{+}$. \end{Definition}\noindent In continuous-time dynamic networks (\emph{i.e.}, temporal networks, graph streams)~\cite{holme2012temporal}, edges occur over a time span $\mathcal{T} \subseteq \mathbb{T}$ where $\mathbb{T}$ is the temporal domain.\footnote{The terms temporal network, graph stream, and continuous-time dynamic network are used interchangeably.} For continuous-time systems $\mathbb{T}=\RR^{+}$. In such networks, a \emph{valid} walk is defined as a sequence of nodes connected by edges with non-decreasing timestamps~\cite{nicosia2013graph}. In other words, if each edge captures the time of contact between two entities, then a (valid temporal) walk may represent a feasible route for a piece of information. More formally, \begin{Definition}[\sc Temporal Walk]\label{def:temporal-walk} A temporal walk from $v_1$ to $v_k$ in $G$ is a sequence of vertices $\langle v_1, v_2, \cdots, v_k \rangle$ such that $\langle v_i, v_{i+1} \rangle \in E_T$ for $1 \leq i < k$, and $\mathcal{T}(v_i, v_{i+1}) \leq \mathcal{T}(v_{i+1}, v_{i+2})$ for $1 \leq i < (k-1)$. For two arbitrary vertices $u$, $v \in V$, we say that $u$ is \textit{temporally connected} to $v$ if there exists a temporal walk from $u$ to $v$.
\end{Definition} \noindent The definition of temporal walk echoes the standard definition of a walk in static graphs but with an additional constraint that requires the walk to respect time; that is, edges must be traversed in non-decreasing order of edge times. As such, temporal walks are naturally asymmetric~\cite{xuan2003computing,ferreira2007evaluation,tremblay2001temporal}. Modeling the dynamic network in a continuous fashion makes it completely trivial to add or remove edges and nodes. For instance, suppose we have a new edge $(v,u,t)$ at time $t$; then we can sample a small number of temporal walks ending in $(v,u)$ and perform a fast partial update to obtain the updated embeddings (see Section~\ref{sec:time-preserving-embeddings} for more details). This is another advantage of our approach compared to previous work that uses discrete static snapshot graphs to approximate the dynamic network. Note that performing a temporal walk forward through time is equivalent to one backward through time. However, for the streaming case (online learning of the embeddings) where we receive an edge $(v,u,t)$ at time $t$, we sample a temporal walk backward through time. A \emph{temporally invalid walk} is a walk that does not respect time. Any method that uses temporally invalid walks or approximates the dynamic network as a sequence of static snapshot graphs is said to have \emph{temporal loss}. \begin{figure}[h!] \vspace{0mm} \centering \begin{center} \scalebox{0.85}{ \begin{tikzpicture} \begin{scope}[blend group = soft light] \fill[gray!70] ( 90:1.5) circle (2); \fill[gray!50] (110:1.8) circle (0.8); \fill[gray!70] (75:1.5) circle (0.4); \end{scope} \node [font=\Large] {\fontsize{18}{20}\selectfont $\mathbb{S}$}; \node at ( 110:1.8) {\fontsize{15}{17}\selectfont $\mathbb{S}_D$}; \node at ( 75:1.5) {\fontsize{15}{17}\selectfont $\mathbb{S}_T$}; \end{tikzpicture} } \end{center} \vspace{-3mm} \caption{ Space of all possible random walks $\mathbb{S}$ (up to a fixed length $L$) including (i) the space of temporal (time-obeying) random walks denoted as $\mathbb{S}_T$ that capture the temporally valid flow of information (or disease, etc.) in the network without any loss and (ii) the space of random walks that are possible when the dynamic network is approximated as a sequence of discrete static snapshot graphs denoted as $\mathbb{S}_{D}$. Notably, there is a very small overlap between $\mathbb{S}_T$ and $\mathbb{S}_D$ since only a small fraction of the walks in $\mathbb{S}_D$ are actually time-respecting (valid temporal walks). } \label{fig:space-of-random-walks} \vspace{0mm} \end{figure} We define a new type of embedding for dynamic networks (graph streams) called continuous-time dynamic network embeddings (CTDNEs). \begin{Definition}[\sc Continuous-Time Dynamic Network Embedding]\label{def:ctdne-problem} Given a dynamic network $G=(V,E_T,\mathcal{T})$ (graph stream), the goal is to learn a function $f : V \rightarrow \RR^{D}$ that maps nodes in the continuous-time dynamic network (graph stream) $G$ to $D$-dimensional time-dependent embeddings using only data that is temporally valid (\emph{e.g.}, temporal walks defined in Definition~\ref{def:temporal-walk}).
\end{Definition}\noindent Unlike previous work that ignores time or \emph{approximates} the dynamic network as a sequence of discrete static snapshot graphs $G_1, \ldots, G_t$, CTDNEs proposed in this work are learned from temporal random walks that capture the true temporal interactions (\emph{e.g.}, flow of information, spread of diseases, etc.) in the dynamic network in a lossless fashion. CTDNEs (or simply dynamic node embeddings) can be learned incrementally or in a streaming fashion where embeddings are updated in real-time as new edges arrive. For this new class of dynamic node embeddings, we describe a general framework for learning such temporally valid embeddings from the graph stream in Section~\ref{sec:framework}. \section{Framework} \label{sec:framework} \noindent While Section~\ref{sec:streaming-network-embeddings} formally introduced the new class of embeddings investigated in this work, this section describes a general framework for deriving them based on the notion of \emph{temporal walks}. The framework has two main interchangeable components that can be used to \emph{temporally bias} the learning of the dynamic node embeddings. We describe each component in Sections~\ref{sec:selection-of-start-time} and~\ref{sec:temporal-random-walk}. In particular, the CTDNE framework generates \emph{(un)biased temporal random walks} from CTDNs that are then used in Section~\ref{sec:time-preserving-embeddings} for deriving time-dependent embeddings that are learned from temporally valid node sequences, capturing in a lossless fashion the actual flow of information or spread of disease in a network. It is straightforward to use the CTDNE framework for temporal networks where edges are active only for a specified time-period. \begin{figure}[h!] \vspace{-2mm} \centering \hspace{-4mm} \includegraphics[width=0.6\linewidth]{fig2} \caption{ Example initial edge selection cumulative probability distributions (CPDs) for each of the variants investigated (uniform, linear, and exponential). Observe that exponential biases the selection of the initial edge towards those occurring more recently than in the past, whereas linear lies between exponential and uniform. } \label{fig-initial-edge-selection-fb-forum} \end{figure} \subsection{Initial Temporal Edge Selection} \label{sec:selection-of-start-time} This section describes approaches to temporally bias the temporal random walks by selecting the initial temporal edge to begin the temporal random walk. In general, each temporal walk starts from a temporal edge $e_i \in E_T$ at time $t=\mathcal{T}(e_i)$ selected from a distribution $\mathbb{F}_s$. The distribution used to select the initial temporal edge can either be uniform, in which case there is no bias, or the selection can be temporally biased using an arbitrary weighted (non-uniform) distribution for $\mathbb{F}_s$. For instance, to learn node embeddings for the temporal link prediction task, we may want to begin more temporal walks from edges closer to the current time point, as the events/relationships in the distant past may be less predictive or indicative of the state of the system now. Selecting the initial temporal edge in an unbiased fashion is discussed in Section~\ref{sec:selection-of-start-time-unbiased} whereas strategies that temporally bias the selection of the initial edge are discussed in Section~\ref{sec:selection-of-start-time-biased}.
In the case of learning CTDNEs in an online fashion, we do not need to select the initial edge since we simply sample a number of temporal walks that end at the new edge. See Section~\ref{sec:time-preserving-embeddings} for more details on learning CTDNEs in an online fashion. \subsubsection{Unbiased} \label{sec:selection-of-start-time-unbiased} In the case of unbiased initial edge selection, each edge $e_i=(v,u,t) \in E_T$ has the same probability of being selected: \begin{equation}\label{eq:uniform-edge} \Pr(e) = 1 / |E_T| \end{equation}\noindent This corresponds to selecting the initial temporal edge using a uniform distribution. \subsubsection{Biased} \label{sec:selection-of-start-time-biased} We describe two techniques to temporally bias the selection of the initial edge that determines the start of the temporal random walk. In particular, we select the initial temporal edge using a temporally weighted distribution based on exponential and linear functions. However, the proposed continuous-time dynamic network embedding framework is flexible with many interchangeable components and therefore can easily support other temporally weighted distributions for selecting the initial temporal edge. \medskip\noindent\textbf{Exponential:} We can bias initial edge selection using an exponential distribution, in which case each edge $e \in E_T$ is assigned the probability: \begin{equation}\label{eq:exponential-dist} \Pr(e) = \frac{\exp\big[ \mathcal{T}(e)-t_{\min}\big]}{\sum_{e^\prime \in E_T} \, \exp\big[ \mathcal{T}(e^\prime)-t_{\min}\big]} \end{equation}\noindent where $t_{\min}$ is the minimum time associated with an edge in the dynamic graph. This defines a distribution that heavily favors edges appearing later in time. \medskip\noindent\textbf{Linear:} When the time difference between two time-wise consecutive edges is large, it can sometimes be helpful to map the edges to discrete time steps. Let $\eta : E_T \rightarrow \mathbb{Z}^{+}$ be a function that sorts (in ascending order by time) the edges in the graph. In other words, $\eta$ maps each edge to an index with $\eta(e) = 1$ for the earliest edge $e$. In this case, each edge $e \in E_T$ will be assigned the probability: \begin{equation}\label{eq:linear-dist} \Pr(e) = \frac{\eta(e)}{\sum_{e^\prime \in E_T} \eta(e^\prime)} \end{equation}\noindent See Figure~\ref{fig-initial-edge-selection-fb-forum} for examples of the uniform, linear, and exponential variants. \subsection{Temporal Random Walks} \label{sec:temporal-random-walk} \noindent After selecting the initial edge $e_i = (u, v, t)$ at time $t$ to begin the temporal random walk (Section~\ref{sec:selection-of-start-time}) using $\mathbb{F}_s$, how can we perform a temporal random walk starting from that edge? We define the set of temporal neighbors of a node $v$ at time $t$ as follows: \begin{Definition}[\sc Temporal Neighborhood]\label{def:temporal-neighbor} The set of temporal neighbors of a node $v$ at time $t$ denoted as $\Gamma_t(v)$ are: \begin{equation}\label{eq:potential-neighbors-at-time-t} \Gamma_t(v) = \big\{(w, t^\prime) \,\, | \,\, e=(v,w, t^\prime) \in E_T \, \wedge \mathcal{T}(e) > t \big\} \end{equation} \end{Definition} \noindent Observe that the same neighbor $w$ can appear multiple times in $\Gamma_t(v)$ since multiple temporal edges can exist between the same pair of nodes. See Figure~\ref{fig:temporal-neighbors} for an example. The next node in a temporal random walk can then be chosen from the set $\Gamma_t(v)$.
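The following NumPy sketch (illustrative only; the paper does not prescribe an implementation, and all identifiers are ours) computes the three initial-edge distributions above and the temporal neighborhood $\Gamma_t(v)$:
\begin{verbatim}
# Illustrative sketch of initial edge selection (uniform, exponential,
# linear) and of the temporal neighborhood Gamma_t(v).
import numpy as np

def initial_edge_probs(times, bias="uniform"):
    """times: array of timestamps T(e) for all edges in E_T."""
    times = np.asarray(times, dtype=float)
    if bias == "uniform":                      # Pr(e) = 1/|E_T|
        p = np.ones_like(times)
    elif bias == "exponential":                # favors recent edges
        p = np.exp(times - times.min())        # exp[T(e) - t_min]
    elif bias == "linear":                     # rank edges by time
        ranks = times.argsort().argsort() + 1  # eta(e): 1 = earliest
        p = ranks.astype(float)
    return p / p.sum()

def temporal_neighbors(edges, v, t):
    """Gamma_t(v): pairs (w, t') with an edge (v, w, t') and t' > t."""
    return [(w, tp) for (u, w, tp) in edges if u == v and tp > t]

rng = np.random.default_rng(0)
times = np.array([1, 2, 3, 4, 5, 7, 8, 10])    # edge stream of Figure 1
p = initial_edge_probs(times, bias="exponential")
start = rng.choice(len(times), p=p)            # index of the initial edge
\end{verbatim}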
Here we use a second distribution $\mathbb{F}_\Gamma$ to \emph{temporally bias} the neighbor selection. Again, this distribution can either be uniform, in which case no bias is applied, or more intuitively biased to consider time. For instance, we may want to bias the sampling strategy towards walks that exhibit smaller ``in-between" time for consecutive edges. That is, for each consecutive pair of edges $(u, v, t)$, and $(v, w, t+k)$ in the random walk, we want $k$ to be small. For temporal link prediction on a dynamic social network, restricting the ``in-between" time allows us to sample walks that do not group friends from different time periods together. As an example, if $k$ is small we are likely to sample the random walk sequence $(v_1, v_2, t), (v_2, v_3, t+k)$ which makes sense as $v_1$ and $v_3$ are more likely to know each other since $v_2$ has interacted with them both recently. On the other hand, if $k$ is large we are unlikely to sample the sequence. This helps to separate people that $v_2$ interacted with during very different time periods (\textit{e.g.} high-school and graduate school) as they are less likely to know each other. \makeatletter \global\let\tikz@ensure@dollar@catcode=\relax \makeatother \tikzstyle{every node}=[font=\large,line width=1.5pt] \begin{figure}[h!] \centering \begin{center} \subfigure[Neighborhood $\Gamma(v_2)$] {\label{fig:neighborhood-example} \scalebox{0.55}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick, main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\bfseries}, inactive node/.style={circle,draw=gray!150,fill=white,draw,font=\sffamily\Large\bfseries,text=gray!150}] \node[main node] (2) {$\mathbf{v_3}$}; \node[main node] (1) [below left of=2] {$\mathbf{v_2}$}; \node[main node] (3) [left of=1] {$\mathbf{v_1}$}; \node[main node] (4) [below right of=1] {$\mathbf{v_5}$}; \node[main node] (5) [right of=1] {$\mathbf{v_4}$}; \node[main node] (6) [above left of=1] {$\mathbf{v_8}$}; \node[main node] (8) [below left of=1] {$\mathbf{v_6}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (3) edge [thick,line width=0.6mm,left] node [above left] {\textbf{t=6}} (1) (1) edge [right] node[above right] {} (6) (1) edge [right] node[above left] {} (8) (1) edge [right] node[above right] {} (5) (1) edge [left] node[below left] {} (4) (1) edge [right] node[above left] {} (2); \end{tikzpicture} } } \hspace{4mm} \subfigure[Temporal neigh. 
$\Gamma_{t}(v_2)$] {\label{fig:temporal-neighborhood-example} \scalebox{0.55}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick, main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\bfseries}, inactive node/.style={circle,draw=gray!150,fill=white,draw,font=\sffamily\Large\bfseries,text=gray!150}] \node[main node] (2) {$\mathbf{v_3}$}; \node[main node] (1) [below left of=2] {$\mathbf{v_2}$}; \node[main node] (3) [left of=1] {$\mathbf{v_1}$}; \node[main node] (4) [below right of=1] {$\mathbf{v_5}$}; \node[main node] (5) [right of=1] {$\mathbf{v_4}$}; \node[inactive node] (6) [above left of=1] {$\mathbf{v_8}$}; \node[inactive node] (8) [below left of=1] {$\mathbf{v_6}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (3) edge [thick,line width=0.6mm,left] node [above left] {\textbf{t=6}} (1) (1) edge [draw=gray!150,text=black, dashed,right] node[above right] {4} (6) (1) edge [draw=gray!150,text=black, dashed,right] node[above left] {1} (8) (1) edge [right] node[above right] {7} (5) (1) edge [left] node[below left] {9} (4) (1) edge [right] node[above left] {8,10} (2); \end{tikzpicture} } } \end{center} \vspace{-4mm} \caption{ Temporal neighborhood of a node $v_2$ at time $t=6$ denoted as $\Gamma_t(v_2)$. Notice that $\Gamma_t(v_2) = \{v_4, v_3, v_5, v_3\}$ is an ordered multiset where the temporal neighbors are sorted in ascending order by time (earliest first). Moreover, the same node can appear multiple times (\emph{e.g.}, a user sends another user multiple emails, or an association/event occurs multiple times between the same entities). This is in contrast to the definition of neighborhood used by previous work that is not parameterized by time, \emph{e.g.}, $\Gamma(v_2) = \{v_3, v_4, v_5, v_6, v_8\}$ or $\Gamma(v_2) = \{v_3, v_3, v_4, v_5, v_6, v_8\}$ if multigraphs are supported. } \label{fig:temporal-neighbors} \vspace{-2mm} \end{figure} \subsubsection{Unbiased} \label{sec:temporal-random-walk-unbiased} For unbiased temporal neighbor selection, given an arbitrary edge $e = (u, v, t)$, each temporal neighbor $w \in \Gamma_t(v)$ of node $v$ at time $t$ has the following probability of being selected: \begin{equation}\label{eq:uniform-neighbor} \Pr(w) = 1 / |\Gamma_t(v)| \end{equation}\noindent \subsubsection{Biased} \label{sec:temporal-random-walk-biased} We describe two techniques to bias the temporal random walks by sampling the next node in a temporal walk via temporally weighted distributions based on exponential and linear functions. However, the continuous-time dynamic network embedding framework is flexible and can easily be used with other application or domain-dependent \emph{temporal bias functions}. \medskip\noindent\textbf{Exponential:} When exponential decay is used, we formulate the probability as follows. Given an arbitrary edge $e = (u, v, t)$, each temporal neighbor $w \in \Gamma_t(v)$ has the following probability of being selected: \begin{equation}\label{eq:exponential-penalty} \Pr(w) = \frac{\exp\!\big[ \tau(v) - \tau(w)\big]}{\sum_{w^\prime \in \Gamma_t(v)} \exp\!\big[ \tau(v) - \tau(w^\prime) \big]} \end{equation}\noindent Note that we abuse the notation slightly here and use $\tau$ to mean the mapping to the corresponding time, so the probability decays with the ``in-between'' time $\tau(w)-\tau(v)$ and temporal neighbors closer in time to $v$ are favored. This is similar to the exponentially decaying probability of consecutive contacts observed in the spread of computer viruses and worms~\cite{holme2012temporal}.
\medskip\noindent\textbf{Linear:} Here, we define $\delta : V \times \RR^{+} \rightarrow \mathbb{Z}^{+}$ as a function which sorts temporal neighbors in descending order time-wise. The probability of each temporal neighbor $w \in \Gamma_t(v)$ of node $v$ at time $t$ is then defined as: \begin{equation}\label{eq:linear-penalty} \Pr(w) = \frac{\delta(w)}{\sum_{w^\prime \in \Gamma_t(v)} \delta(w^\prime)} \end{equation}\noindent This distribution biases the selection towards edges that are closer in time to the current node. \subsubsection{Temporal Context Windows} Since temporal walks preserve time, it is possible for a walk to run out of \emph{temporally valid} edges to traverse. Therefore, we do not impose a strict length on the temporal random walks. Instead, we simply require each temporal walk to have a minimum length $\omega$ (in this work, $\omega$ is equivalent to the context window size for skip-gram \cite{skipgram-old}). A maximum length $L$ can be provided to accommodate longer walks. A temporal walk $\mathcal{S}_{t_i}$ with length $|\mathcal{S}_{t_i}|$ is considered valid iff \[ \omega \leq |\mathcal{S}_{t_i}| \leq L \] Given a set of temporal random walks $\{ \mathcal{S}_{t_1}, \mathcal{S}_{t_2}, \cdots, \mathcal{S}_{t_k}\}$, we define the temporal context window count $\beta$ as the total number of context windows of size $\omega$ that can be derived from the set of temporal random walks. Formally, this can be written as: \begin{equation} \label{eq:stopping-criterion} \beta \, = \sum_{i=1}^{k} \big( |\mathcal{S}_{t_i}| - \omega + 1\big) \end{equation} \noindent When deriving a set of temporal walks, we typically set $\beta$ to be a multiple of $N = |V|$. Note that this is only an implementation detail and is not important for Online CTDNEs. \begin{figure*}[t!] \centering \includegraphics[width=0.28\linewidth]{fig3.pdf} \hspace{4mm} \includegraphics[width=0.28\linewidth]{fig4.pdf} \hspace{4mm} \includegraphics[width=0.28\linewidth]{fig5.pdf} \vspace{-1mm} \caption{Frequency of \emph{temporal random walks} by length} \label{fig:temporal-walk-length-freq} \end{figure*} \subsection{Learning Dynamic Node Embeddings} \label{sec:time-preserving-embeddings} \noindent Given a temporal walk $\mathcal{S}_{t}$, we can now formulate the task of learning time-preserving node embeddings in a CTDN as the optimization problem: \begin{align} \label{eq:obj-func} \max_{f} \; \log \Pr \big(\,W_T = \{v_{i-\omega},\cdots,v_{i+\omega} \} \setminus v_i \;|\; f(v_i) \big) \end{align}\noindent where $f : V \rightarrow \RR^{D}$ is the node embedding function, $\omega$ is the context window size for optimization, and \[ W_T = \{v_{i-\omega},\cdots,v_{i+\omega} \} \]\noindent such that \[ \mathcal{T}(v_{i-\omega},v_{i-\omega+1}) < \cdots < \mathcal{T}(v_{i+\omega-1},v_{i+\omega}) \]\noindent is an arbitrary temporal context window $W_{T} \subseteq S_t$. For tractability, we assume conditional independence between the nodes of a temporal context window when observed with respect to the source node $v_i$. 
That is: \begin{align} \label{eq:conditional-indep} \Pr \big(\,W_T | f(v_i) \big) = \prod_{v_{i+k} \in W_T} \Pr \big(v_{i+k} | f(v_i) \big) \end{align} \noindent We can model the conditional likelihood of every source-nearby node pair $(v_i, v_j)$ as a softmax unit parameterized by a dot product of their feature vectors: \begin{align}\label{eq:cond-ll} \Pr \big(\,v_j | f(v_i) \big) = \frac{\exp\!\big[ f(v_j) \cdot f(v_i)\big]}{\sum_{v_k \in V} \exp\!\big[ f(v_k) \cdot f(v_i) \big]} \end{align}\noindent Using Eqs.~\ref{eq:conditional-indep}--\ref{eq:cond-ll}, the optimization problem in Eq.~\ref{eq:obj-func} reduces to: \begin{align}\label{eq:obj-func-simplifies} \max_{f} \; \sum_{v_i \in V} \Bigg( - \log Z_i + \sum_{v_{j} \in W_T} f(v_j) \cdot f(v_i) \Bigg) \end{align}\noindent where the term $Z_i = \sum_{v_j \in V} \exp\!\big[ f(v_i) \cdot f(v_j) \big]$ can be approximated by negative sampling. Given a graph $G$, let $\mathbb{S}$ be the space of all possible random walks on $G$ and let $\mathbb{S}_{T}$ be the space of all temporal random walks on $G$. It is straightforward to see that the space of temporal random walks $\mathbb{S}_{T}$ is contained within $\mathbb{S}$, and $\mathbb{S}_{T}$ represents only a tiny fraction of possible random walks in $\mathbb{S}$. Existing methods sample a set of random walks $\mathcal{S}$ from $\mathbb{S}$ whereas this work focuses on sampling a set of \emph{temporal random walks} $\mathcal{S}_t$ from $\mathbb{S}_{T} \subseteq \mathbb{S}$ (Fig.~\ref{fig:space-of-random-walks}). Further, let $\mathbb{S}_{D}$ denote the space of random walks on any sequence of discrete static snapshot graphs that approximates $G$. In general, the probability of an existing method sampling a temporal random walk from $\mathbb{S}$ by chance is extremely small and therefore the vast majority of random walks sampled by these methods represent sequences of events between nodes that are invalid (not possible) when time is respected. \smallskip \begin{Claim} Fix $L>0$; then $|\mathbb{S}| \gg |\mathbb{S}_D| \gg |\mathbb{S}_T|$. \end{Claim} \smallskip \noindent Therefore, previous methods that learn embeddings from random walks are unlikely to generate \emph{temporally valid sequences} of events/interactions between nodes that are actually possible when time is respected. {\algrenewcommand{\alglinenumber}[1]{\fontsize{6.5}{7}\selectfont#1 } \newcommand{\multiline}[1]{\State \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{#1\strut}} \begin{figure}[h!]
\vspace{-2mm} \centering \begin{algorithm}[H] \caption{\,\small Continuous-Time Dynamic Network Embeddings } \label{alg:temporal-node2vec} { \begin{spacing}{1.15} \fontsize{7.5}{8.5}\selectfont \begin{algorithmic}[1] \vspace{-1.3mm} \Require a dynamic network (graph stream) $G = (V,\E_T,\mathcal{T})$, temporal context window count $\beta$, context window size $\omega$, embedding dimensions $D$ \smallskip \State Initialize number of temporal context windows $C = 0$ \While {$\beta - C > 0$ } \State Sample an edge $e_{t}\!=\!(v,u)$ via $\mathbb{F}_s$ (or use new edge at time $t$) \State $t \leftarrow \mathcal{T}(e_{t})$ \State $S_t = \textsc{TemporalWalk}(G, e_{t}, t, L, \omega + \beta - C - 1)$ \label{algline:obtain-temporal-walk} \If {$|S_t| > \omega$} \State Add the \emph{temporal walk} $S_t$ to $\mathcal{S}_T$ \label{algline:add-temporal-walk-to-set} \State $C \leftarrow C + (|S_t| - \omega + 1)$ \EndIf \EndWhile \State $\mZ = \textsc{StochasticGradientDescent}(\omega, D, \mathcal{S}_T)$ \label{algline:SGD-with-temporal-walks} \Comment{update embeddings} \State \textbf{return} \emph{dynamic} node embeddings $\mZ$ \label{algline:return-learned-representation-matrix} \end{algorithmic} \end{spacing}} \end{algorithm} \vspace{-2mm} \end{figure}} {\algrenewcommand{\alglinenumber}[1]{\fontsize{6.5}{7}\selectfont#1 } \newcommand{\multiline}[1]{\State \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{#1\strut}} \begin{figure}[h!] \vspace{-9.2mm} \begin{algorithm}[H] \caption{\,\small Temporal Random Walk } \label{alg:temporal-random-walk}{ \begin{spacing}{1.15} \fontsize{7.5}{8.5}\selectfont \begin{algorithmic}[1] \vspace{-1.3mm} \Procedure{TemporalWalk}{$G^{\prime}$, $e=(s,r)$, $t$, $L$, $C$} \State Set $i \leftarrow r$ and initialize temporal walk $S_t = \big[\, s, r \,\big]$ \label{algline:temporal-walk-init-walk-and-add-start-node-function} \For{$p = 1$ {\bf to} $\min(L, C) - 1$} \label{algline:temporal-walk-for} \State $\Gamma_t(i) = \big\{(w, t^\prime) \,\, | \,\, e=(i,w, t^\prime) \in E_T \, \wedge \mathcal{T}(e) > t \big\} $ \label{algline:temporal-walk-get-neighbors} \If {$|\Gamma_t(i)| > 0$} \State Select node $j$ from distribution $\mathbb{F}_\Gamma (\Gamma_t(i))$ \label{algline:temporal-walk-alias-sample} \State Append $j$ to $S_t$ \label{algline:temporal-walk-add-node-function-to-list} \State Set $t \leftarrow \mathcal{T}(i,j)$ and set $i \leftarrow j$ \Else \; terminate temporal walk \EndIf \EndFor \label{algline:temporal-walk-for-end} \State \textbf{return} temporal walk $S_t$ of length $|S_t|$ rooted at node $s$ \label{algline:temporal-walk-return-temporal-walk} \EndProcedure \end{algorithmic} \end{spacing}} \end{algorithm} \vspace{-2mm} \end{figure} } We summarize the procedure to learn time-preserving embeddings for CTDNs in Algorithm~\ref{alg:temporal-node2vec}, which generalizes the Skip-Gram architecture to learn continuous-time dynamic network embeddings (CTDNEs). However, the framework can easily be used for other deep graph models that leverage random walks (\emph{e.g.},~\cite{lee17-Deep-Graph-Attention}) as the temporal walks can serve as input vectors for neural networks. There are many methods that can be adapted to learn CTDN embeddings using \emph{temporal random walks} (\emph{e.g.}, node2vec~\cite{node2vec}, struc2vec~\cite{struc2vec}, role2vec~\cite{role2vec}) and the proposed framework is not tied to any particular approach.
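\medskip\noindent For readers who prefer working code to pseudocode, the following is a minimal Python sketch of \textsc{TemporalWalk} (Algorithm~\ref{alg:temporal-random-walk}); it reuses the \texttt{temporal\_neighbors} helper sketched earlier and, for brevity, uses the uniform $\mathbb{F}_\Gamma$ (a biased variant would reweight the candidates as in Section~\ref{sec:temporal-random-walk-biased}).
\begin{verbatim}
import random

def temporal_walk(adj, s, r, t, L, C):
    # Sketch of Algorithm 2: start from edge (s, r) at time t
    # and walk forward in time for at most min(L, C) - 1 steps.
    walk = [s, r]
    i = r
    for _ in range(min(L, C) - 1):
        nbrs = temporal_neighbors(adj, i, t)
        if not nbrs:
            break  # ran out of temporally valid edges
        t, i = random.choice(nbrs)  # uniform F_Gamma
        walk.append(i)
    return walk
\end{verbatim}
\noindent As in Algorithm~\ref{alg:temporal-node2vec}, a sampled walk is kept only if its length is at least $\omega$, so that at least one temporal context window can be derived from it.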
We point out that Algorithm~\ref{alg:temporal-node2vec} is useful for prediction tasks where the goal is to learn a model using all data up to time $t$ for prediction of a future discrete or real-valued attribute or state (\emph{e.g.}, if a link exists or not). Since this work evaluates CTDNEs for link prediction, we include it mainly for the reader to understand one evaluation strategy using CTDNE. However, other applications may require online incremental learning and updating of the embeddings in a streaming fashion as new edges arrive. Recall that CTDNE naturally supports such streaming settings where edges (or new nodes) arrive continuously over time~\cite{ahmed17streams} and the goal is to update the embeddings in real-time via fast, efficient updates. In Algorithm~\ref{alg:CTDNE-online}, we present an online CTDNE learning framework for incrementally updating the node embeddings as new edges arrive over time from the edge stream. Consider an edge stream $e_1, e_2, \ldots, e_k,\ldots, e_{t-1}, e_{t}, \ldots$ with timestamped edges. Suppose a new edge $(v,u,t)$ arrives at time $t$ from the edge stream (Line~\ref{algline:online-CTDNE-while-edge-arrives}). Then we immediately update the graph by adding the edge $(v,u,t)$ to $E \leftarrow E \cup \{(v,u,t)\}$ as shown in Line~\ref{algline:online-CTDNE-add-edge-and-nodes-if-needed}.\footnote{At this point, we can also remove any stale edges, \emph{e.g.}, edges that occurred in the distant past defined by some $\Delta t$.} If either $v$ or $u$ is a new node, \emph{i.e.}, $v \not\in V$ or $u \not\in V$, then we simply set $V \leftarrow V \cup \{v,u\}$. Notice that if $v,u \in V$ then $V \leftarrow V \cup \{v,u\}$ in Line~\ref{algline:online-CTDNE-add-edge-and-nodes-if-needed} has no impact. The next step is to sample a set of temporal walks $\mathcal{S}_{t}$ with the constraint that each temporal walk ends at the new edge $(v,u,t)$ from the edge stream (Line~\ref{algline:online-CTDNE-sample-temporal-walks}). We obtain temporal walks that end in $(v,u,t)$ by reversing the temporal walk and going backwards through time as shown in Figure~\ref{fig:online-temporal-walk}. This yields a set of temporal walks that include the new edge, which are then used to incrementally update the embeddings. Since by definition no other edge could have appeared after $(v,u,t)$, the new edge must be at the end of any such temporal walk, and we can therefore construct the walk by going backwards through time. Finally, we incrementally update the appropriate node embeddings using only the sampled temporal walks $\mathcal{S}_{t}$ ending at $(v,u,t)$ at time $t$ (Line~\ref{algline:online-CTDNE-update-embeddings}). In this work, we use online SGD updates (online word2vec)~\cite{kaji2017incremental,peng2017incrementally,luo2015online,li2017psdvec} to incrementally learn the embeddings as new edges arrive. However, other incremental optimization schemes can easily be used as well~\cite{duchi2011adaptive,flaxman2005online,zhao2012fast,schraudolph2007stochastic,ge2015escaping,ying2008online}. While Algorithm~\ref{alg:CTDNE-online} assumes the graph stream is infinite, the current and most recently updated embeddings $\vz_1, \vz_2, \ldots, \vz_N$ can be obtained at any time $t$. Concept drift is naturally handled by the framework since we incrementally update embeddings upon the arrival of each edge in the stream using walks that are temporally valid.
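\medskip\noindent The following sketch illustrates one online update in the spirit of Algorithm~\ref{alg:CTDNE-online}, under the same data layout as the earlier snippets (integer node identifiers, time-sorted adjacency lists); \texttt{update} is a hypothetical stand-in for an online SGD/word2vec step and is not part of any particular library. Walks are grown backwards in time from the new edge and then reversed.
\begin{verbatim}
import bisect
import random

def earlier_neighbors(adj, v, t):
    # Pairs (t', w) with t' < t, for walking backwards in time
    # (a strict inequality is used here for simplicity).
    i = bisect.bisect_left(adj[v], (t, -float("inf")))
    return adj[v][:i]

def process_new_edge(adj, v, u, t, omega, L, n_walks, update):
    # One online step: add the edge, then sample temporal walks
    # that end at (v, u, t) by growing them backwards in time.
    adj.setdefault(v, []).append((t, u))  # stays time-sorted
    adj.setdefault(u, []).append((t, v))  # if edges arrive in order
    for _ in range(n_walks):
        walk, i, tw = [u, v], v, t
        while len(walk) < L:
            nbrs = earlier_neighbors(adj, i, tw)
            if not nbrs:
                break
            tw, i = random.choice(nbrs)
            walk.append(i)
        if len(walk) >= omega:
            # reverse so node order obeys increasing time,
            # ending at the new edge (v, u, t)
            update(list(reversed(walk)))  # online SGD step
\end{verbatim}
\noindent Each arriving edge thus triggers only a handful of short, temporally valid walks and a small partial update of the affected embeddings.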
Hence, the context and resulting embedding of a node changes temporally as the graph evolves over time. Furthermore, we can relax the requirement of updating the embeddings after every new edge, and instead, we can wait until a fixed number of edges arrive before updating the embeddings or wait until a fixed amount of time elapses. We call such an approach batched CTDNE updating. The only difference in Algorithm~\ref{alg:CTDNE-online} is that instead of performing an update immediately, we would wait until one of the above conditions become true and then perform a batch update. We can also drop edges that occur in the distant past or that have a very small weight. {\algrenewcommand{\alglinenumber}[1]{\fontsize{6.5}{7}\selectfont#1 } \newcommand{\multiline}[1]{\State \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{#1\strut}} \begin{figure}[t!] \vspace{-2mm} \centering \begin{algorithm}[H] \caption{\, Online Learning of Node Embeddings from Edge Streams (Online CTDNE) } \label{alg:CTDNE-online} { \begin{spacing}{1.15} \fontsize{8.0}{9.0}\selectfont \begin{algorithmic}[1] \vspace{-0.5mm} \Require a dynamic network (graph stream) $G$, embedding dimensions $D$ \Ensure dynamic node embeddings $\mZ$ at time $t$ \smallskip \While{new edge $(v,u,t)$ arrives at time $t$ from edge stream} \label{algline:online-CTDNE-while-edge-arrives} \State Add edge $(v,u,t)$ to $E \leftarrow E \cup \{(v,u,t)\}$ and $V \leftarrow V \cup \{v,u\}$ \label{algline:online-CTDNE-add-edge-and-nodes-if-needed} \State Sample temporal walks $\mathcal{S}_{t}$ ending in edge $(v,u,t)$ \label{algline:online-CTDNE-sample-temporal-walks} \State Update embeddings via online SGD/word2vec using only $\mathcal{S}_{t}$ \label{algline:online-CTDNE-update-embeddings} \EndWhile \vspace{0.2mm} \end{algorithmic} \end{spacing}} \end{algorithm} \vspace{-7mm} \end{figure} } \vspace{-4mm} \subsection{Hyperparameters} \noindent While other methods have a lot of hyperparameters that require tuning such as node2vec~\cite{node2vec}, the proposed framework has a single hyperparameter that requires tuning. Note that since the framework is general and flexible with many interchangeable components, there is of course the possibility of introducing additional hyperparameters depending on the approaches used to bias the temporal walks. \medskip\noindent\textbf{Arbitrary temporal walk length}: Unlike walks in static graphs, temporal walks in the proposed framework can be of any arbitrary length. In particular, the user does not need to select the length of the walks to sample as required by static embedding methods~\cite{node2vec,deepwalk}, among the many other hyperparameters required by such methods. As an aside, the temporal context size $\omega$ is not specific to the framework, but arises from the base embedding method that we used. For instance, suppose node2vec/deepwalk is used as the base embedding method in the proposed framework, then $\omega$ is simply the context/window size, and therefore, the only requirement on the length of the walk is that it is at least as large as $\omega$, which ensures at least one temporal context can be generated from it. This is obviously better than node2vec/deepwalk, which requires selecting at least $L$, $R$, and $\omega$. Figure~\ref{fig:node-occur-temporal-walks} investigates the number of times each node appears in the sampled temporal walks. We also study the frequency of starting a temporal random walk from each node in Figure~\ref{fig:node-starting-temporal-walk-freq}. \begin{figure}[t!] 
\centering \includegraphics[width=0.46\linewidth]{fig6.pdf} \hfill \includegraphics[width=0.46\linewidth]{fig8.pdf} \vspace{-1mm} \caption{Number of occurrences of each node in the set of sampled temporal walks.} \label{fig:node-occur-temporal-walks} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.46\linewidth]{fig9.pdf} \hfill \includegraphics[width=0.46\linewidth]{fig11.pdf} \vspace{-1mm} \caption{Frequency of starting a temporal random walk from each node. Unlike previous approaches that sample a fixed number of random walks for each node, the proposed framework samples an edge between two nodes to obtain a timestamp to begin the temporal random walk. } \label{fig:node-starting-temporal-walk-freq} \end{figure} \section{Theoretical Analysis} \label{sec:complexity} \noindent Let $N=|V|$ denote the number of nodes, $M=|E_T|$ be the number of edges, $D = $ dimensionality of the embedding, $R = $ the number of temporal walks per node, $L = $ the maximum length of a temporal random walk, and $\Delta = $ the maximum degree of a node. Recall that while $R$ is not required, we use it here since the number of temporal random walks $|\mathcal{S}_T|$ is a multiple of the number of nodes $N=|V|$ and thus can be written as $RN$ similar to previous work. \subsection{Time Complexity} \noindent \begin{Lemma} The time complexity for learning CTDNEs using the generalized Skip-gram architecture in Section~\ref{sec:time-preserving-embeddings} is \begin{equation}\label{eq:time-complexity-CTDNE-biased} \mathcal{O}(M + N (R \log M + R{L}\Delta + D)) \end{equation}\noindent and the time complexity for learning CTDNEs with \emph{unbiased} temporal random walks (uniform) is: \begin{equation}\label{eq:time-complexity-DTDNE-biased} \mathcal{O}(N (R \log M + R{L}\log \Delta + D)) \end{equation}\noindent \end{Lemma} \noindent \noindent\textsc{Proof}. The time complexity of each of the three steps is provided below. We assume the exponential variant is used for both $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$ since this CTDNE variant is the most computationally expensive among the nine CTDNE variants expressed from using uniform, linear, or exponential for $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$. Edges are assumed to be ordered by time such that $\mathcal{T}(e_1) \leq \mathcal{T}(e_2) \leq \cdots \leq \mathcal{T}(e_{M})$. Similarly, the neighbors of each node are also ordered by time. \textbf{Initial Temporal Edge Selection:} To derive $\mathbb{F}_s$ for any of the variants used in this work (uniform, linear, exponential) it takes $\mathcal{O}(M)$ time since each variant can be computed with a single or at most two passes over the edges. Selecting an initial edge via $\mathbb{F}_s$ takes $\mathcal{O}(\log M)$ time. Now $\mathbb{F}_s$ is used to select the initial edge for each temporal random walk $S_{t} \in \mathcal{S}_T$ and thus an initial edge is selected $RN=|\mathcal{S}_T|$ times. This gives a total time complexity of $\mathcal{O}(M + RN \log M)$.\footnote{Note for uniform initial edge selection, the time complexity is linear in the number of temporal random walks $\mathcal{O}(RN)$.} \textbf{Temporal Random Walks:} After the initial edge is selected, the next step is to select the next temporally valid neighbor from the set of temporal neighbors $\Gamma_{t}(v)$ of a given node $v$ at time $t$ using a (weighted) distribution $\mathbb{F}_{\Gamma}$ (\emph{e.g.}, uniform, linear, exponential). Note $\mathbb{F}_{\Gamma}$ must be computed and maintained for each node. 
Given a node $v$ and a time $t_{*}$ associated with the previous edge traversal in the temporal random walk, the first step in any variant (uniform, linear, exponential; Section~\ref{sec:temporal-random-walk}) is to obtain the ordered set of temporal neighbors $\Gamma_{t}(v) \subseteq \Gamma(v)$ of node $v$ that occur after $t_{*}$. Since the set of all temporal neighbors is already stored and ordered by time, we only need to find the index of the first neighbor $w \in \Gamma(v)$ with time $t>t_{*}$ as this gives us $\Gamma_{t}(v)$. Therefore, $\Gamma_{t}(v)$ is derived in $\mathcal{O}(\log |\Gamma(v)|)$ time via a binary search over the ordered set $\Gamma(v)$. In the worst case, this is $\mathcal{O}(\log \Delta)$ where $\Delta = \max_{v \in V} |\Gamma(v)|$ is the maximum degree. After obtaining $\Gamma_{t}(v) \subseteq \Gamma(v)$, we derive $\mathbb{F}_{\Gamma}$ in $\mathcal{O}(\Delta)$ time when $d_v = \Delta$. Now, selecting the next temporally valid neighbor according to $\mathbb{F}_{\Gamma}$ takes $\mathcal{O}(\log \Delta)$ for exponential and linear and $\mathcal{O}(1)$ for uniform. For the uniform variant, we select the next temporally valid neighbor in $\mathcal{O}(1)$ constant time by $j \sim \textrm{UniformDiscrete}\{1,2,\ldots,|\Gamma_t(v)|\}$ and then obtain the selected temporal neighbor by directly indexing into $\Gamma_t(v)$. Therefore, the time complexity to select the next node in a biased temporal random walk is $\mathcal{O}(\log \Delta + \Delta) = \mathcal{O}(\Delta)$ in the worst case and $\mathcal{O}(\log \Delta)$ for unbiased (uniform). For a temporal random walk of length ${L}$, the time complexity is $\mathcal{O}({L}\Delta)$ for a biased walk with linear/exponential and $\mathcal{O}({L} \log \Delta)$ for an unbiased walk. Therefore, the time complexity for $RN$ biased temporal random walks of length ${L}$ is $\mathcal{O}(RN{L}\Delta)$ in the worst case and $\mathcal{O}(RN{L}\log \Delta)$ for unbiased. \textbf{Learning Time-dependent Embeddings:} For the Skip-Gram-based generalization given in Section~\ref{sec:time-preserving-embeddings}, the time complexity per iteration of Stochastic Gradient Descent (SGD) is $\mathcal{O}(ND)$ where $D \ll N$. While the time complexity of a single iteration of SGD is less than a single iteration of Alternating Least Squares (ALS)~\cite{pilaszy2010fast}, SGD requires more iterations to obtain a sufficiently good model and is sensitive to the choice of learning rate~\cite{yun2014nomad,oh2015fast}. Moreover, SGD is more challenging to parallelize compared to ALS~\cite{pilaszy2010fast} or Cyclic Coordinate Descent (CCD)~\cite{kim2014algorithms,rossi2015dsaa-pcmf}. Nevertheless, the choice of optimization scheme depends on the objective function of the embedding method generalized via the CTDNE framework. \subsection{Space Complexity} \noindent Storing the $\mathbb{F}_{s}$ distribution takes $\mathcal{O}(M)$ space. The temporal neighborhoods do not require any additional space (as we simply store an index). Storing $\mathbb{F}_{\Gamma}$ takes $\mathcal{O}(\Delta)$ (which can be reused for each node in the temporal random walk). The embedding matrix $\mZ$ takes $\mathcal{O}(ND)$ space. Therefore, the space complexity of CTDNEs is $\mathcal{O}(M + ND + \Delta) = \mathcal{O}(M + ND)$. This obviously holds in the online stream setting where edges arrive continuously over time and updates are made in an online fashion since this is a special case of the more general CTDNE setting. \begin{table}[b!]
\vspace{-3mm} \centering \renewcommand{\arraystretch}{1.15} \fontsize{8}{9}\selectfont \setlength{\tabcolsep}{6.0pt} \caption{Dynamic Network Data and Statistics.} \vspace{-2.5mm} \label{table:dynamic-network-stats} \begin{tabular}{r l ll c H@{}} \multicolumn{6}{@{}p{0.94\linewidth}}{\footnotesize Let $|E_T|$ = number of \emph{temporal edges}; $\bar{d}$ = average temporal node degree; and $d_{\max}$ = max temporal node degree. } \\ \toprule & & & & \textbf{Timespan} \\ \textbf{Dynamic Network} & $|E_T|$ & $\bar{d}$ & $d_{\max}$ & \textbf{(days)} \\ \midrule \text{ia-contact} & 28.2K & 206.2 & 2092 & 3.97 \\ \text{ia-hypertext} & 20.8K & 368.5 & 1483 & 2.46 \\ \text{ia-enron-employees} & 50.5K & 669.8 & 5177 & 1137.55 \\ \text{ia-radoslaw-email} & 82.9K & 993.1 & 9053 & 271.19 \\ \text{ia-email-EU} & 332.3K & 674.1 & 10571 & 803.93 \\ \text{ fb-forum} & 33.7K & 75.0 & 1841 & 164.49 \\ \text{soc-bitcoinA} & 24.1K & 12.8 & 888 & 1901.00 \\ \text{soc-wiki-elec} & 107K & 30.1 & 1346 & 1378.34 \\ \bottomrule \end{tabular} \end{table} \section{Experiments} \label{sec:exp} \noindent The experiments are designed to investigate the effectiveness of the proposed \emph{continuous-time dynamic network embeddings} (CTDNE) framework for prediction. To ensure the results and findings of this work are significant and meaningful, we investigate a wide range of temporal networks from a variety of application domains with fundamentally different structural and temporal characteristics. A summary of the dynamic networks used for evaluation and their statistics are provided in Table~\ref{table:dynamic-network-stats}. All networks investigated are continuous-time dynamic networks with $\mathbb{T} = \RR^{+}$. For these dynamic networks, the time scale of the edges is at the level of seconds or milliseconds, \emph{i.e.}, the edge timestamps record the time an edge occurred at the level of seconds or milliseconds (finest granularity given as input). Our approach uses the finest time scale available in the graph data as input. All data is from NetworkRepository~\cite{nr} and is easily accessible for reproducibility. We designed the experiments to answer four important questions. First, are \emph{continuous-time dynamic network embeddings} (CTDNEs) more useful than embeddings from methods that ignore time? Second, how do the different embedding methods from the CTDNE framework compare? Third, are CTDNEs better than embeddings learned from a sequence of discrete snapshot graphs that approximate the edge stream (DTNE methods)? Finally, can we incrementally learn node embeddings fast using the online CTDNE framework? \subsection{Experimental setup} \noindent Since this work is the first to learn embeddings over an edge stream (CTDN), there are no methods that are directly comparable. Nevertheless, we first compare CTDNE against node2vec~\cite{node2vec}, DeepWalk~\cite{deepwalk}, and LINE~\cite{line}. For node2vec, we use the same hyperparameters ($D=128$, $R=10$, $L=80$, $\omega = 10$) and grid search over $p,q\in \{0.25, 0.50, 1, 2, 4\}$ as mentioned in~\cite{node2vec}. The same hyperparameters are used for DeepWalk (with the exception of $p$ and $q$). Unless otherwise mentioned, CTDNE methods use $\omega = 10$ and $D=128$. For LINE, we also use $D=128$ with 2nd-order-proximity and number of samples $T=$ 60 million. \begin{table}[h!] 
\centering \small \fontsize{8}{9}\selectfont \renewcommand{\arraystretch}{1.15} \setlength{\tabcolsep}{2.0pt} \caption{Results for Temporal Link Prediction (AUC).} \label{table:link-pred-results} \vspace{-2.4mm} \begin{tabularx}{1.00\linewidth}{r cc X c r} \toprule \textbf{Dynamic Network} & \textbf{DeepWalk} & \textbf{Node2Vec} & \textbf{LINE} & \textbf{CTDNE} & (\textsc{Gain}) \\ \midrule \text{ia-contact} & \text{0.845} & \textrm{0.874} & \textrm{0.736} & \textbf{0.913} & (\text{+10.37\%}) \\ \text{ia-hypertext} & \text{0.620} & \textrm{0.641} & \textrm{0.621} & \textbf{0.671} & (\text{+6.51\%}) \\ \text{ia-enron-employees} & \textrm{0.719} & \textrm{0.759} & \textrm{0.550} & \textbf{0.777} & (\text{+13.00\%}) \\ \text{ia-radoslaw-email} & \textrm{0.734} & \textrm{0.741} & \textrm{0.615} & \textbf{0.811} & (\text{+14.83\%}) \\ \text{ia-email-EU} & \textrm{0.820} & \textrm{0.860} & \textrm{0.650} & \textbf{0.890} & (\text{+12.73\%}) \\ \text{fb-forum} & \textrm{0.670} & \textrm{0.790} & \textrm{0.640} & \textbf{0.826} & (\text{+15.25\%}) \\ \text{soc-bitcoinA} & \textrm{0.840} & \textrm{0.870} & \textrm{0.670} & \textbf{0.891} & (\text{+10.96\%}) \\ \text{soc-wiki-elec} & \textrm{0.820} & \textrm{0.840} & \textrm{0.620} & \textbf{0.857} & (\text{+11.32\%}) \\ \bottomrule \multicolumn{6}{l}{\footnotesize $^{\star}$\textsc{Gain} = mean gain in AUC averaged over all embedding methods.} \\ \end{tabularx} \vspace{-2mm} \end{table} \subsection{Comparison} \label{sec:comparison} \noindent We evaluate the performance of the proposed framework on the temporal link prediction task. To generate a set of labeled examples for link prediction, we first sort the edges in each graph by time (ascending) and use the first $75\%$ for representation learning. The remaining $25\%$ are considered as positive links and we sample an equal number of negative edges randomly. Since the temporal network is a multi-graph where an edge between two nodes can appear multiple times with different timestamps, we take care to ensure edges that appear in the training set do not appear in the test set. We perform link prediction on this labeled data $\mathcal{X}$ of positive and negative edges. After the embeddings are learned for each node, we derive edge embeddings by combining the learned embedding vectors of the corresponding nodes. More formally, given embedding vectors $\vz_i$ and $\vz_j$ for node $i$ and $j$, we derive an edge embedding vector $\vz_{ij} = \Phi(\vz_i, \vz_j)$ where \begin{equation} \label{eq:embedding-functions} \nonumber \Phi \in \big\lbrace(\vz_i + \vz_j)\big/2,\;\; \vz_i \odot \vz_j,\;\; \abs{\vz_i - \vz_j},\;\; (\vz_i - \vz_j)^{\circ 2}\big\rbrace \end{equation}\noindent and $\vz_i \odot \vz_j$ is the element-wise (Hadamard) product and $\vz^{\circ 2}$ is the Hadamard power. We use logistic regression (LR) with hold-out validation of $25\%$. Experiments are repeated for 10 random seed initializations and the average performance is reported. Unless otherwise mentioned, we use ROC AUC (denoted as AUC for short) to evaluate the models and use the same number of dimensions $D$ for all models. To compare the methods fairly, we ensure all baseline methods use the same amount of information for learning. In particular, the number of \emph{temporal context windows} is \begin{equation} \label{eq:stopping-criterion} \beta = R \times N \times (L - \omega + 1) \end{equation}\noindent where $R$ denotes the number of walks for each node and $L$ is the length of a random walk required by the baseline methods. 
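\medskip\noindent As a concrete illustration of the edge-embedding operators $\Phi$ defined above, the following is a brief NumPy sketch (ours, not taken from any baseline implementation; the names are placeholders):
\begin{verbatim}
import numpy as np

# The four operators used to combine node embeddings z_i, z_j
# into an edge embedding: mean, Hadamard product, absolute
# difference, and squared (Hadamard power) difference.
EDGE_OPS = {
    "mean":     lambda zi, zj: (zi + zj) / 2.0,
    "hadamard": lambda zi, zj: zi * zj,
    "abs-diff": lambda zi, zj: np.abs(zi - zj),
    "sq-diff":  lambda zi, zj: (zi - zj) ** 2,
}

def edge_embedding(Z, i, j, op="hadamard"):
    # Z is the (N x D) node embedding matrix; i, j node indices.
    return EDGE_OPS[op](Z[i], Z[j])
\end{verbatim}
\noindent The resulting edge vectors are the inputs to the logistic regression classifier described above.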
Recall that $R$ and $L$ are \emph{not} required by CTDNE and are only used above to ensure that all methods use exactly the same amount of information for evaluation purposes. Note that since CTDNE does not collect a fixed number of random walks (of a fixed length) for each node as done by many other embedding methods~\cite{deepwalk,node2vec}, the user simply specifies the expected number of temporal context windows per node, and the total number of temporal context windows $\beta$ is derived as a multiple of the number of nodes $N=|V|$. Hence, CTDNE is also easier to use as it requires far fewer hyperparameters that must be carefully tuned by the user. Observe that it is possible (though unlikely) that a node $u \in V$ is not in a valid temporal walk, \emph{i.e.}, the node does not appear in any temporal walk $S_t$ with length $|S_t| \geq \omega$. If such a case occurs, we simply relax the notion of temporal random walk for that node by ensuring the node appears in at least one random walk of sufficient length, even if part of the random walk does not obey time. As an aside, relaxing the notion of temporal random walks by allowing the walk to sometimes violate the time-constraint can be viewed as a form of regularization. Results are shown in Table~\ref{table:link-pred-results}. For this experiment, we use the simplest CTDNE variant from the proposed framework and did not apply any \emph{additional bias} to the selection strategy. In other words, both $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$ are set to the uniform distribution. We note, however, that since temporal walks are time-obeying (by Definition~\ref{def:temporal-walk}), the selection is already biased towards edges that appear later in time as the random walk traversal does not go back in time. In Table~\ref{table:link-pred-results}, the proposed approach is shown to perform consistently better than DeepWalk, node2vec, and LINE. This is an indication that important information is lost when temporal information is ignored. Strikingly, the CTDNE model does not leverage the bias introduced by node2vec~\cite{node2vec}, and yet still outperforms this model by a significant margin. We could have generalized node2vec in a similar manner using the proposed framework from Section~\ref{sec:framework}. We can expect to achieve even better predictive performance by using the CTDNE framework to derive a continuous-time node2vec generalization by replacing the notion of random walks in node2vec with the notion of \emph{temporal random walks} biased by the (weighted) distributions $\mathbb{F}_s$ (Section~\ref{sec:selection-of-start-time}) and $\mathbb{F}_{\Gamma}$ (Section~\ref{sec:temporal-random-walk}). \begin{table}[h!] \vspace{-4mm} \centering \setlength{\tabcolsep}{3.0pt} \renewcommand{\arraystretch}{1.15} \small \fontsize{8}{9}\selectfont \caption{Results for Different CTDNE Variants} \label{table:variants-link-pred-results} \vspace{-2.4mm} \begin{tabularx}{1.00\linewidth}{ll c XXXX @{}} \multicolumn{7}{p{1.0\linewidth}}{\footnotesize $\mathbb{F}_s$ is the distribution for initial edge sampling and $\mathbb{F}_{\Gamma}$ is the distribution for temporal neighbor sampling.
} \\ \toprule \multicolumn{2}{c}{\textsc{Variant}} \\ \multicolumn{1}{c}{\fontsize{11}{12}\selectfont $\mathbb{F}_s$} & \multicolumn{1}{c}{\fontsize{11}{12}\selectfont $\mathbb{F}_{\Gamma}$} && \multicolumn{1}{l}{\textsf{\fontsize{7.5}{8.5}\selectfont contact}} & \textsf{\fontsize{7.5}{8.5}\selectfont hyper} & \textsf{\fontsize{7.5}{8.5}\selectfont enron} & \multicolumn{1}{l}{\textsf{\fontsize{7.5}{8.5}\selectfont rado}} \\ \midrule \fontsize{8.5}{9.5}\selectfont $\mathbf{Unif}$ (Eq.~\ref{eq:uniform-edge}) & $\mathbf{Unif}$ (Eq.~\ref{eq:uniform-neighbor}) && 0.913 & 0.671 & 0.777 & 0.811 \\ $\mathbf{Unif}$ (Eq.~\ref{eq:uniform-edge}) & $\mathbf{Lin}$ (Eq.~\ref{eq:linear-penalty}) && 0.903 & 0.665 & 0.769 & 0.797 \\ $\mathbf{Lin}$ (Eq.~\ref{eq:linear-dist}) & $\mathbf{Unif}$ (Eq.~\ref{eq:uniform-neighbor}) && 0.915 & 0.675 & 0.773 & 0.818 \\ $\mathbf{Lin}$ (Eq.~\ref{eq:linear-dist}) & $\mathbf{Lin}$ (Eq.~\ref{eq:linear-penalty}) && 0.903 & 0.667 & 0.782 & 0.806 \\ $\mathbf{Exp}$ (Eq.~\ref{eq:exponential-dist}) & $\mathbf{Exp}$ (Eq.~\ref{eq:exponential-penalty}) && \textbf{0.921} & 0.681 & \textbf{0.800} & 0.820 \\ $\mathbf{Unif}$ (Eq.~\ref{eq:uniform-edge}) & $\mathbf{Exp}$ (Eq.~\ref{eq:exponential-penalty}) && 0.913 & 0.670 & 0.759 & 0.803 \\ $\mathbf{Exp}$ (Eq.~\ref{eq:exponential-dist}) & $\mathbf{Unif}$ (Eq.~\ref{eq:uniform-neighbor}) && 0.920 & \textbf{0.718} & 0.786 & \textbf{0.827} \\ $\mathbf{Lin}$ (Eq.~\ref{eq:linear-dist}) & $\mathbf{Exp}$ (Eq.~\ref{eq:exponential-penalty}) && 0.916 &0.681 & 0.782 & 0.823 \\ $\mathbf{Exp}$ (Eq.~\ref{eq:exponential-dist}) & $\mathbf{Lin}$ (Eq.~\ref{eq:linear-penalty}) && 0.914 & 0.675 & 0.747 & 0.817\\ \bottomrule \end{tabularx} \vspace{-2mm} \end{table} In all cases, the proposed approach significantly outperforms the other embedding methods across all dynamic networks (Table~\ref{table:link-pred-results}). The mean gain in AUC averaged over all embedding methods for each dynamic network is shown in Table~\ref{table:link-pred-results}. Notably, CTDNE achieves an overall gain in AUC of $11.9\%$ across all embedding methods and graphs. These results indicate that modeling and incorporating the temporal dependencies in graphs is important for learning appropriate and meaningful network representations. It is also worth noting that many other approaches that leverage random walks can also be generalized using the proposed framework~\cite{struc2vec,ComE,ASNE,dong2017metapath2vec,lee17-Deep-Graph-Attention}, as well as any future state-of-the-art embedding method. \vspace{-2mm} \subsection{Comparing Variants from CTDNE Framework} \label{sec:exp-variants} \noindent We investigate three different approaches for $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$ giving rise to nine different CTDNE variants by taking all possible combinations of unbiased and biased distributions discussed in Section~\ref{sec:selection-of-start-time} and Section~\ref{sec:temporal-random-walk}. In particular, we investigated three different approaches to sample (1) the starting temporal edge $e_*$ via $\mathbb{F}_s$, and (2) each subsequent edge in a temporal random walk via $\mathbb{F}_{\Gamma}$. For learning dynamic node embeddings in an online fashion, $\mathbb{F}_s$ is not required since for each new edge $(i,j,t)$ in the graph stream, we sample a number of temporal walks ending at $(i,j)$ and use these to update the embedding. 
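\medskip\noindent To clarify how these variants differ computationally, the following sketch shows the (un)biased temporal neighbor distributions $\mathbb{F}_{\Gamma}$ underlying them (Eqs.~\ref{eq:uniform-neighbor}, \ref{eq:linear-penalty}, and~\ref{eq:exponential-penalty}), under the same assumptions as the earlier snippets; the weights are normalized and sampled exactly as in the initial edge selection sketch, and raw timestamps may require rescaling before exponentiation.
\begin{verbatim}
import math

def neighbor_weights(nbrs, t_v, variant="uniform"):
    # Unnormalized F_Gamma weights over the temporal neighbors
    # nbrs = [(t', w), ...] of v, ascending by time with t' > t_v.
    if variant == "uniform":
        return [1.0] * len(nbrs)
    if variant == "exponential":
        # decays with the in-between time t' - t_v, so neighbors
        # closer in time are favored
        return [math.exp(-(tp - t_v)) for (tp, _) in nbrs]
    if variant == "linear":
        # delta ranks neighbors in descending order by time, so
        # the closest-in-time neighbor gets the largest weight
        n = len(nbrs)
        return [float(n - k) for k in range(n)]
    raise ValueError(variant)
\end{verbatim}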
Overall, we find that using a biased distribution (\emph{e.g.}, linear or exponential) improves predictive performance in terms of AUC when compared to the uniform distribution on many graphs. For others, however, there is no noticeable gain in performance. This can likely be attributed to the fact that most of the dynamic networks investigated have a relatively short time span (at most about five years; see Table~\ref{table:dynamic-network-stats}). Table~\ref{table:variants-link-pred-results} provides results for a few other variants from the framework. In particular, Table~\ref{table:variants-link-pred-results} shows the difference in AUC when applying a biased distribution to the initial edge selection strategy $\mathbb{F}_s$ as well as the temporal neighbor selection $\mathbb{F}_{\Gamma}$ for the temporal random walk. Interestingly, biasing $\mathbb{F}_s$ appears to yield larger improvements than biasing $\mathbb{F}_{\Gamma}$ on the tested datasets. However, for \text{ia-enron-employees}, the best result can be observed when both distributions are biased. \subsection{Continuous vs. Discrete Approximation-based Embeddings} \noindent We also investigate the difference between discrete-time models that learn embeddings from a sequence of discrete snapshot graphs, and the class of continuous-time embeddings proposed in this paper. \begin{Definition}[\sc DTDN Embedding] \label{def:DTDNE} A discrete-time dynamic network embedding (DTDNE) is defined as any embedding derived from a sequence of discrete static snapshot graphs $\mathcal{G} = \{G_1,G_2,\ldots,G_t\}$. This includes any embedding learned from temporally smoothed static graphs or any representation derived from the initial sequence of discrete static graphs. \end{Definition}\noindent Previous work for temporal networks has focused on DTDNE methods as opposed to the class of CTDNE methods proposed in this work. Notice that DTDNE methods use \emph{approximations} of the actual dynamic network, whereas CTDN embeddings leverage the actual temporally valid information without any loss. In this experiment, we create discrete snapshot graphs and learn embeddings for each one using the previous approaches. As an example, suppose we have a sequence of $T=4$ snapshot graphs where each graph represents a day of activity and further suppose $D=128$. For each snapshot graph, we learn a $(D/T)$-dimensional embedding and concatenate them all to obtain a $D$-dimensional embedding and then evaluate the embedding for link prediction as described previously. \begin{table}[b!] \vspace{-8mm} \centering \small \fontsize{8}{9}\selectfont \renewcommand{\arraystretch}{1.10} \setlength{\tabcolsep}{2.0pt} \caption{Results Comparing DTDNEs to CTDNEs (AUC)} \label{table:link-pred-results-discrete-model} \vspace{-2.4mm} \begin{tabularx}{1.0\linewidth}{@{}r cc cc c @{}rH} \multicolumn{8}{p{1.0\linewidth}}{\footnotesize CTDNE-Unif uses uniform for both $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$ whereas CTDNE-Opt selects the distributions via model learning (and hence corresponds to the best model).
} \\ \toprule \textbf{Dynamic Network} && \textbf{DTDNE} && \textbf{CTDNE-Unif} \; & \textbf{CTDNE-Opt} & \;(\textsc{Gain}) \\ \midrule \text{ia-contact} && \textrm{0.843} && \text{0.913} & \textbf{0.921} & (\text{+8.30\%}) \\ \text{ia-hypertext} && 0.612 && \text{0.671} & \textbf{0.718} & (\text{+9.64\%}) \\ \text{ia-enron-employees} && 0.721 && \text{0.777} & \textbf{0.800} & (\text{+7.76\%}) \\ \text{ia-radoslaw-email} && 0.785 && \text{0.811} & \textbf{0.827} & (\text{+3.31\%}) \\ \bottomrule \multicolumn{8}{p{1.0\linewidth}}{\footnotesize $^{\star}$\textsc{Gain} = gain in AUC of CTDNE-Unif over DTDNE.} \\ \end{tabularx} \end{table} A challenging problem common with DTDNE methods is how to handle nodes that are not active in a given static snapshot graph $G_i$ (\emph{i.e.}, the node has no edges that occur in $G_i$). In such situations, we set the node embedding for that static snapshot graph to all zeros. However, we also investigated using the node embedding from the last active snapshot graph as well as setting the embedding of an inactive node to be the mean embedding of the active nodes in the given snapshot graph and observed similar results. More importantly, unlike DTDNE methods, which require heuristics to handle many such issues (\emph{e.g.}, choosing the time-scale, handling inactive nodes, etc.), CTDNEs do not. CTDNEs also avoid many other issues~\cite{CTDNE} discussed previously that arise from DTDN embedding methods that use a sequence of discrete static snapshot graphs to approximate the actual dynamic network. For instance, it is challenging and unclear how to select the ``best'' (most appropriate) time-scale used to create the sequence of static snapshot graphs; and the actual time-scale is highly dependent on the temporal characteristics of the network and the underlying application. More importantly, all DTDNs (regardless of the time-scale) are \emph{approximations} of the actual dynamic network. Thus, any DTDN embedding method is inherently lossy and is only as good as the discrete approximation of the CTDN (graph stream). Results are provided in Table~\ref{table:link-pred-results-discrete-model}. Since node2vec always performs the best among the baseline methods (Table~\ref{table:link-pred-results}), we use it as a basis for the DTDN embeddings. For brevity, we show results for each of the networks used previously in Table~\ref{table:variants-link-pred-results}. Overall, the proposed CTDNEs perform better than DTDNEs as shown in Table~\ref{table:link-pred-results-discrete-model}. Note that CTDNE in Table~\ref{table:link-pred-results-discrete-model} corresponds to using uniform for both $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$. Naturally, better results can be achieved by learning $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$ automatically as shown in Table~\ref{table:variants-link-pred-results}. The gain in AUC for each graph is shown in the rightmost column in Table~\ref{table:link-pred-results-discrete-model}. The mean gain in AUC of CTDNE compared to DTDNE over all graphs is $7.25\%$. \definecolor{typeTwoColor}{RGB}{222,45,38} \definecolor{typeOneColor}{RGB}{49,130,189} \definecolor{typeThreeColor}{RGB}{77,172,38} \makeatletter \global\let\tikz@ensure@dollar@catcode=\relax \makeatother \tikzstyle{every node}=[font=\large,line width=1.5pt] \begin{figure}[h!]
\centering \begin{center} \scalebox{0.5}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick, main node/.style={circle,draw=white,fill=typeOneColor,draw,text=white,minimum width=0.9cm,font=\sffamily\Large\bfseries}, red node/.style={circle,draw=white,fill=typeTwoColor,draw,text=white,minimum width=0.9cm,font=\sffamily\Large\bfseries}, white node/.style={circle,draw=white,fill=white,text=white,draw,text=white,minimum width=0.9cm,font=\sffamily\Large\bfseries}, whitesmall node/.style={circle,fill=white,draw=white,minimum width=0.02cm,font=\sffamily\Large\bfseries}] \node[main node] (3) {}; \node[main node] (10) [left of=3, left=9mm] {}; \node[main node] (1) [below left of=3, left=2mm] {}; \node[main node] (4) [below right of=1, left=0.1mm] {}; \node[white node] (44) [below left of=1, left=4mm] {}; \node[white node] (444) [below left of=1, left=8mm, above=1mm] {}; \node[white node] (55) [above left of=1, left=4mm] {}; \node[white node] (66) [left of=1] {}; \node[red node] (2) [below right of=3] {$\mathbf{k}$}; \node[main node] (9) [below right of=2] {}; \node[red node] (5) [right of=2, right=5mm] {$\mathbf{i}$}; \node[red node] (6) [below right of=5, right=5mm] {$\mathbf{j}$}; \node[white node] (88) [above of=2, below=13mm, left=0mm] {}; \node[whitesmall node] (99) [below of=2, above=15mm, left=5mm] {}; \node[whitesmall node] (999) [left of=9] {}; \node[white node] (7) [left of=1] {$\mathbf{---}$}; \node[white node] (8) [right of=5] {$\mathbf{---}$}; \node[white node] (111) [below of=66, left=0mm] {}; \node[white node] (222) [below of=8, right=5mm] {}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\large \sffamily}] (10) edge [left] node [above left] {$\mathbf{t_1}$} (3) (1) edge [right] node [above left] {$\mathbf{t_2}$} (2) (4) edge [] node[anchor=center,below] {$\mathbf{t_3}$} (2) (9) edge [right] node[above left] {$\mathbf{t_6}$} (6) (55) edge [dashed, left] node[below left] {} (1) (44) edge [dashed, left] node[below left] {} (4) (44) edge [dashed, right] node[below right] {} (2) (66) edge [dashed, left] node[below left] {} (1) (99) edge [dashed] node[below=0pt] {} (9) (999) edge [dashed] node[below=0pt] {} (9) (3) edge[bend left] node[sloped,anchor=center,above] {$\mathbf{t_4}$} (5) (5) edge [line width=0.5mm] node[anchor=center,above] {\Large \sffamily \bf t} (6) (2) edge[right, line width=0.5mm] node[sloped,anchor=center,above] {$\mathbf{t_5}$} (5) (111) edge [thick,line width=1.5mm,draw=black, below right] node [below right] {\;\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \Large \bf time} (222); \end{tikzpicture} } \end{center} \vspace{-5mm} \caption{ Temporal Walks for Online CTDNEs. Given a new edge $(i, j,t)$ at time $t$, we immediately add it to the graph and then sample temporal walks ending at edge $(i,j)$ and use them to update the relevant embeddings. An example of a temporal walk is $k \!\! \rightarrow \! i \!\! \rightarrow \! j$ (red nodes). Note $t > t_6 > t_5 > t_4 > t_3 > t_2 > t_1$. In this example, $k$ and $j$ are the training instances. Hence, $\vz_i$ is updated every time $i$ is used in a temporal edge. } \label{fig:online-temporal-walk} \vspace{-5mm} \end{figure} \subsection{Incremental Learning of Node Embeddings} \label{sec:exp-online-learning} \noindent For some applications, it is important to incrementally learn and update embeddings from edges as soon as they arrive in a streaming fashion. 
In such a streaming online setting, we perform fast partial updates to obtain updated embeddings in real-time. Given an edge $(i,j,t)$ at time $t$, we simply obtain a few temporal walks ending at $(i,j)$ and use these to obtain the updated embeddings. An example is shown in Figure~\ref{fig:online-temporal-walk}. In these experiments, we use online SGD updates (online word2vec)~\cite{kaji2017incremental,peng2017incrementally,luo2015online,li2017psdvec} to incrementally learn the embeddings as new edges arrive. However, other incremental optimization schemes can be used as well (\emph{e.g.}, see~\cite{duchi2011adaptive,flaxman2005online,zhao2012fast,schraudolph2007stochastic,ge2015escaping,ying2008online}). We vary the number of temporal walks sampled for every new edge that arrives. Results are shown in Table~\ref{table:streaming-results}. Notably, it takes on average only a few milliseconds to update the embeddings across a wide variety of temporal network streams. These results are from a Python implementation of the approach, and thus the runtime to process a single edge in the stream could be reduced significantly further using a C++ implementation of the incremental/online learning approach. \begin{table}[h!] \centering \renewcommand{\arraystretch}{1.10} \footnotesize \setlength{\tabcolsep}{6.0pt} \caption{Streaming Online Network Embedding Results} \vspace{-2.4mm} \label{table:streaming-results} \begin{tabular}{r HH llH HH H ccc HH} \multicolumn{12}{p{0.9\linewidth}}{\footnotesize Average runtime (in milliseconds) per edge is reported. We vary the number of walks per new edge from 1 to 10. Recall $|E_T|$ = \# of \emph{temporal edges} and $\bar{d}$ = average temporal node degree. } \\ \toprule & & & & & & & & & \multicolumn{3}{c}{\bf $\mathbf{Time}$ (ms.)} \\ \cmidrule(l{3pt}r{3pt}){6-12} \textbf{Dynamic Network} & & & $|E_T|$ & $\bar{d}$ & & & & & $\mathbf{1}$ & $\mathbf{5}$ & $\mathbf{10}$ & \\ \midrule \text{ia-hypertext} & & & 20.8K & 368.5 & && && 2.769 & 3.721 & 4.927 \\ \text{fb-forum} & & & 33.7K & 75.0 & && && 2.875 & 3.412 & 4.230 \\ \text{soc-wiki-elec} & & & 107K & 30.1 & && && 2.788 & 3.182 & 3.813 \\ \text{ia-contact} & & & 28.2K & 206.2 & && && 2.968 & 4.490 & 6.119 \\ \text{ia-radoslaw-email} & & & 82.9K & 993.1 & && && 3.266 & 5.797 & 8.916 \\ \text{soc-bitcoinA} & & & 24.1K & 12.8 & && && 2.679 & 2.965 & 3.347 \\ \bottomrule \end{tabular} \vspace{-2mm} \end{table} \subsection{Discussion} \label{sec:exp-discussion} \noindent Recently, a wide variety of work has built on the key idea proposed in our shorter manuscript from early 2018~\cite{CTDNE}, which is to leverage temporal walks to extend existing embedding methods, \emph{e.g.}, see~\cite{huang2020temporal,node2bits-arxiv,kumar2019predicting,beres2019node,trivedi2018dyrep,sajjad2019efficient,heidari2020evolving}. This includes temporal walks based on BFS, DFS, or both. Since these works appeared after our original manuscript, we did not compare against them or review them in detail above. However, we briefly summarize some of these recent works. In particular, node2bits~\cite{node2bits-arxiv} used the idea of temporal walks to learn space-efficient dynamic embeddings for user stitching. There has been some work for temporal bipartite edge streams where an RNN-based model is proposed to embed users and items by leveraging the notion of a 1-hop temporal walk used in this work~\cite{kumar2019predicting}.
Other work has used the proposed temporal walks to learn embeddings for tracking and measuring node similarity in edge streams~\cite{beres2019node}. More recently, some work has also used the proposed idea of leveraging temporal walks for embeddings to extend Graph Neural Networks (GNNs)~\cite{huang2020temporal}. In particular, these works use BFS-based temporal walks. Notably, all of these works are based on complex deep learning techniques that leverage temporal walks, yet they achieve comparable results on some problems. \section{Challenges \& Future Directions} \label{sec:discussion} \noindent\textbf{Attributed Networks \& Inductive Learning}: The proposed framework for learning \emph{dynamic node embeddings} can be easily generalized to \emph{attributed networks} and to \emph{inductive learning} tasks in temporal networks (graph streams) using the ideas introduced in~\cite{role2vec,ahmed17Gen-Deep-Graph-Learning}. More formally, the notion of attributed/feature-based walks (proposed in~\cite{role2vec,ahmed17Gen-Deep-Graph-Learning}) can be combined with the notion of temporal random walks as follows: \begin{Definition}[\sc Attributed Temporal Walk] \label{def:attr-temporal-random-walk} Let $\vx_i$ be a $d$-dimensional feature vector for node $v_i$. An attributed temporal walk $S$ of length $L$ is defined as a sequence of adjacent node feature-values $\phi(\vx_{i_{1}}), \phi(\vx_{i_{2}}),\ldots, \phi(\vx_{i_{L+1}})$ associated with a sequence of indices $i_{1}, i_{2}, \ldots, i_{L+1}$ such that {\smallskip \begin{compactenum} \item $(v_{i_{t}}, v_{i_{t+1}}) \in E_T$ for all $1 \leq t \leq L$ \item $\mathcal{T}(v_{i_{t}}, v_{i_{t+1}}) \leq \mathcal{T}(v_{i_{t+1}}, v_{i_{t+2}})$ for $1 \leq t < L$ \item $\phi : \vx \rightarrow y$ is a function that maps the input vector $\vx$ of a node to a corresponding feature-value $\phi(\vx)$. \end{compactenum}\noindent }\noindent The feature sequence $\phi(\vx_{i_{1}}), \phi(\vx_{i_{2}}),\ldots, \phi(\vx_{i_{L+1}})$ represents the feature-values that occur during a temporally valid walk, i.e., a walk that obeys the direction of time defined in (2). \end{Definition}\noindent Attributed temporal random walks can be either uniform (unbiased) or non-uniform (biased). Furthermore, the features used in attributed walks can be (i) intrinsic input attributes (such as profession, political affiliation), (ii) structural features derived from the graph topology (degree, triangles, etc.; or even node embeddings from an arbitrary method), or both. Temporal attributed walks can be sampled for every feature as done in~\cite{node2bits-arxiv}. In this case, $\phi : \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}$ and thus we have $d$ different feature-based walks for every temporal walk sampled.
Suppose $\phi$ is the identity function; then for an arbitrary temporal walk $\lbrace(v_{i_{1}}, v_{i_{2}}, t_{i_{1}})$, $(v_{i_{2}},v_{i_{3}}, t_{i_{2}}), \ldots, (v_{i_{L}},$ $v_{i_{L+1}}, t_{i_{L}})\rbrace$ such that $t_{i_{1}} \leq t_{i_{2}} \leq \ldots \leq t_{i_{L}}$, we have the following $d$ attributed temporal walks (one per feature): \begin{align} \begin{matrix} X_{i_{1},1} & X_{i_{2},1} & \cdots & X_{i_{k},1} & \cdots \\ X_{i_{1},2} & X_{i_{2},2} & \cdots & X_{i_{k},2} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ X_{i_{1},d} & X_{i_{2},d} & \cdots & X_{i_{k},d} & \cdots \\ \end{matrix} \end{align} A recent work called node2bits~\cite{node2bits-arxiv} leveraged this idea for learning inductive dynamic node embeddings and demonstrated its effectiveness compared to a variety of state-of-the-art methods. We refer the reader to~\cite{node2bits-arxiv} for detailed results and findings. \medskip\noindent\textbf{Other Types of Temporal Networks}: While this work naturally supports temporal networks and graph streams in general, there are many other networks with more specialized characteristics. For instance, some temporal networks (graph streams) contain edges with start and end times. Developing CTDNE methods for such temporal networks remains a challenge. Furthermore, another open and challenging problem that remains to be addressed is how to develop graph stream embedding techniques that require a fixed amount of space. Other applications may require dynamic node embedding methods that are space-efficient (\emph{e.g.}, by learning a sparse vector representation for each node). \medskip\noindent\textbf{Temporal Weighting and Bias}: This paper explored a number of temporal weighting and bias functions for decaying the weights of data that appears further in the past. More research is needed to fully understand the impact of these functions and to characterize the types of temporal networks for which each should be used. Some early work has focused on temporally weighting the links, nodes, and attributes prior to learning embeddings~\cite{rossi2012dynamic-srl}. However, this idea has yet to be explored for learning general node embeddings and should be investigated in future work, along with new temporal weighting schemes for links, nodes, and attributes. Furthermore, one can also incorporate a decay function for each temporal walk such that more temporal influence is given to recent nodes in the walk than to nodes in the distant past. Hence, each temporal walk is assigned a sequence of weights which can be incorporated into the Skip-Gram approach; for instance, an exponential decay function would assign the weight sequence $\alpha^{t-1}, \alpha^{t-2}, \ldots, \alpha^{t-k}$ to the nodes of the walk. However, there are many other ways to temporally weight or bias the walk and it is unclear when one approach works better than another. Future work should systematically investigate different approaches. \section{Conclusion} \label{sec:conc} \noindent In this work, we described a new class of embeddings based on the notion of temporal walks. This new class of embeddings is learned directly from the temporal network (graph stream) without having to approximate the edge stream as a sequence of discrete static snapshot graphs. As such, these embeddings can be learned in an online fashion as they are naturally amenable to graph streams and incremental updates. We investigated a framework for learning such dynamic node embeddings using the notion of temporal walks.
The proposed approach can be used as a basis for generalizing existing (or future state-of-the-art) random walk-based embedding methods for learning dynamic node embeddings from dynamic networks (graph streams). The result is a more appropriate dynamic node embedding that captures the important temporal properties of the node in the continuous-time dynamic network. By learning dynamic node embeddings based on temporal walks, we avoid the issues and information loss that arise when time is ignored or approximated using a sequence of discrete static snapshot graphs. In contrast to previous work, the proposed class of embeddings is learned from temporally valid information. The experiments demonstrated the effectiveness of this new class of dynamic embeddings on several real-world networks. \makeatletter \IEEEtriggercmd{\reset@font\normalfont\fontsize{7.9pt}{8.40pt}\selectfont} \makeatother \IEEEtriggeratref{1} \section{Introduction} \label{sec:intro} \IEEEPARstart{D}{ynamic} networks are seemingly ubiquitous in the real world. Such networks evolve over time with the addition, deletion, and changing of nodes and links. The temporal information in these networks is known to be important to accurately model, predict, and understand network data~\cite{watts1998collective,newman2001structure}. Despite the importance of these dynamics, most previous work on embedding methods has ignored the temporal information in network data~\cite{deepwalk,node2vec,line,grarep,deepGL,struc2vec,ASNE,ahmed17learning-attr-graphs,ComE,lee17-Deep-Graph-Attention}. \makeatletter \global\let\tikz@ensure@dollar@catcode=\relax \makeatother \tikzstyle{every node}=[font=\large,line width=1.5pt] \begin{figure}[t!] \centering \begin{center} \subfigure[Graph (edge) stream]{ \scalebox{0.45}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick, main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\bfseries}, invis node/.style={circle,draw=white,fill=white,draw,font=\sffamily\Large\bfseries,text=black}] \node[main node] (2) at (0,0) {$\mathbf{v_2}$}; \node[main node] (1) [below of=2] {$\mathbf{v_1}$}; \node[main node] (3) [right of=2] at (-1,0) {$\mathbf{v_3}$}; \node[main node] (22) [below of=3]{$\mathbf{v_2}$}; \node[main node] (4) [right of=3] at (0.5,0) {$\mathbf{v_4}$}; \node[main node] (33) [below of=4]{$\mathbf{v_3}$}; \node[main node] (11) [right of=4] at (2,0) {$\mathbf{v_1}$}; \node[main node] (44) [below of=11]{$\mathbf{v_4}$}; \node[main node] (444) [right of=11] at (3.5,0) {$\mathbf{v_4}$}; \node[main node] (333) [below of=444]{$\mathbf{v_3}$}; \node[main node] (3333) [right of=444] at (5,0) {$\mathbf{v_3}$}; \node[main node] (5) [below of=3333]{$\mathbf{v_5}$}; \node[main node] (55) [right of=3333] at (6.5,0) {$\mathbf{v_5}$}; \node[main node] (222) [below of=55]{$\mathbf{v_2}$}; \node[main node] (33333) [right of=55] at (8,0) {$\mathbf{v_3}$}; \node[main node] (6) [below of=33333]{$\mathbf{v_6}$}; \node[invis node] (0) [right of=33333] at (9.3,0) {$\mathbf{}$}; \node[invis node] (00) [below of=0]{$\mathbf{}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (1) edge [left] node [anchor=center, left] {1} (2) (22) edge [left] node [anchor=center, left] {2} (3) (33) edge [left] node [anchor=center, left] {3} (4) (44) edge [left] node [anchor=center, left] {4} (11) (333) edge [left] node [anchor=center, left] {5} (444) (5) edge [left] node [anchor=center, left] {7} (3333) (222) edge [left] node [anchor=center, left] {8}
(55) (6) edge [left] node [anchor=center, left] {10} (33333) (00) edge [thick,line width=0mm,draw=white,left] node [anchor=center, left] {\Large \bf $\cdots$} (0); \end{tikzpicture} } } \subfigure[Continuous-Time Dynamic Network (CTDN)]{ \scalebox{0.5}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick, main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\bfseries}, white node/.style={circle,draw=white,fill=white,text=white,draw,font=\sffamily\Large\bfseries}] \node[main node] (3) {$\mathbf{v_2}$}; \node[main node] (1) [below left of=3] {$\mathbf{v_1}$}; \node[main node] (4) [below right of=1] {$\mathbf{v_4}$}; \node[main node] (2) [below right of=3] {$\mathbf{v_3}$}; \node[main node] (5) [right of=2] {$\mathbf{v_5}$}; \node[main node] (6) [below right of=2] {$\mathbf{v_6}$}; \node[white node] (7) [left of=1] {$\mathbf{---}$}; \node[white node] (8) [right of=5] {$\mathbf{---}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (1) edge [left] node [above left] {1} (3) (2) edge [right] node[below right] {3,5} (4) (4) edge [left] node[below left] {4} (1) (3) edge[bend left] node[sloped,anchor=center,above] {8} (5) (5) edge node[anchor=center,above] {7} (2) (6) edge node[sloped,anchor=center,below] {10} (2) (3) edge [right] node[above right] {2} (2); \end{tikzpicture} } } \end{center} \caption{ Dynamic network. Edges are labeled by time. Observe that existing methods that ignore time would consider $v_4 \!\! \longrightarrow \! v_1 \!\! \longrightarrow \! v_2$ a \emph{valid} walk; however, $v_4 \!\! \longrightarrow \! v_1 \!\! \longrightarrow \! v_2$ is clearly \emph{invalid with respect to time} since $v_1 \!\! \longrightarrow \! v_2$ exists in the past with respect to $v_4 \!\! \longrightarrow \! v_1$. In this work, we propose the notion of \emph{temporal random walks} for embeddings that capture the \emph{true temporally valid} behavior in networks. In addition, our approach naturally supports learning in \emph{graph streams} where edges arrive continuously over time (\emph{e.g.}, every second/millisecond). } \label{fig:info-loss-example} \end{figure} In this work, we address the problem of learning dynamic node embeddings directly from edge streams (\emph{i.e.}, \emph{continuous-time dynamic networks}) consisting of a sequence of timestamped edges at the finest temporal granularity, with the goal of improving the accuracy of predictive models. We propose \emph{continuous-time dynamic network embeddings} (CTDNE) and describe a general framework for learning such embeddings based on the notion of \emph{temporal random walks} (walks that respect time). The framework is general with many interchangeable components and can be used in a straightforward fashion for incorporating temporal dependencies into existing node embedding and deep graph models that use random walks. Most importantly, CTDNEs are learned from temporal random walks that represent actual \emph{temporally valid sequences} of node interactions and thus avoid the issues and information loss that arise when time is ignored~\cite{deepwalk,node2vec,line,grarep,deepGL,struc2vec,ASNE,ahmed17learning-attr-graphs,ComE,lee17-Deep-Graph-Attention} or approximated as a sequence of discrete static snapshot graphs~\cite{rossi2013dbmm-wsdm,hisano2016semi,kamra2017dgdmn,saha2018models,rahman2018dylink2vec} (Figure~\ref{fig:info-discrete-time-model-loss-example}), as done in previous work.
The proposed approach (1) obeys the direction of time and (2) biases the random walks towards edges (and nodes) that are more recent and more frequent. The result is a more appropriate time-dependent network representation that captures the important temporal properties of the continuous-time dynamic network at the finest and most natural temporal granularity without loss of information, using walks that are temporally valid (as opposed to walks that do not obey time and are therefore invalid and noisy, as they represent sequences of events that are impossible with respect to time). Hence, the framework allows existing embedding methods to be easily adapted for learning more appropriate network representations from continuous-time dynamic networks by ensuring time is respected and avoiding impossible sequences of events. The proposed framework learns more appropriate dynamic node embeddings directly from a stream of timestamped edges at the finest temporal granularity. In particular, this work proposes the use of temporal walks as a basis to learn temporally valid node embeddings that capture the important temporal dependencies of the network at the finest and most natural granularity (\emph{e.g.}, at a time scale of seconds or milliseconds). This is in contrast to approximating the dynamic network as a sequence of static snapshot graphs $G_1,\ldots,G_t$ where each static snapshot graph represents all edges that occur within a user-specified discrete-time interval (\emph{e.g.}, day or week)~\cite{rossi2012dynamic-srl,soundarajan2016generating,sun2007graphscope}. Besides the obvious loss of information, such approximations raise other issues, such as selecting an appropriate aggregation granularity, which is known to be an important and challenging problem in itself and can lead to poor predictive performance or misleading results. In addition, our approach naturally supports learning in \emph{graph streams} where edges arrive continuously over time (\emph{e.g.}, every second/millisecond)~\cite{aggarwal2011outlier,ahmed17streams,aggarwal2010dense,guha2012graph} and therefore can be used for a variety of applications requiring real-time performance~\cite{pienta2015scalable,cai2012facilitating,ahmed2015interactive}. We demonstrate the effectiveness of the proposed framework and generalized dynamic network embedding method for temporal link prediction in several real-world networks from a variety of application domains. Overall, the proposed method achieves an average gain of $11.9\%$ across all methods and graphs. The results indicate that modeling temporal dependencies in graphs is important for learning appropriate and meaningful network representations. In addition, any existing embedding method or deep graph model that uses random walks can benefit from the proposed framework (\emph{e.g.},~\cite{deepwalk,node2vec,struc2vec,ComE,ASNE,dong2017metapath2vec,ahmed17learning-attr-graphs,lee17-Deep-Graph-Attention}) as it serves as a basis for incorporating important temporal dependencies into existing methods. Methods generalized by the framework are able to learn more meaningful and accurate time-dependent network embeddings that capture important properties from continuous-time dynamic networks.
Previous embedding methods and deep graph models that use random walks search over the space of random walks $\mathbb{S}$ on $G$, whereas the class of models (continuous-time dynamic network embeddings) proposed in this work learns temporal embeddings by searching over the space $\mathbb{S}_{T}$ of temporal random walks that obey time; thus $\mathbb{S}_{T}$ includes only \emph{temporally valid walks}. See Figure~\ref{fig:space-of-random-walks} for intuition. Informally, a \emph{temporal walk} $S_t$ from node $v_{i_{1}}$ to node $v_{i_{L+1}}$ is defined as a sequence of edges $\lbrace(v_{i_{1}}, v_{i_{2}}, t_{i_{1}})$, $(v_{i_{2}},v_{i_{3}}, t_{i_{2}}), \ldots, (v_{i_{L}},$ $v_{i_{L+1}}, t_{i_{L}})\rbrace$ such that $t_{i_{1}} \leq t_{i_{2}} \leq \ldots \leq t_{i_{L}}$. A temporal walk represents a \emph{temporally valid} sequence of edges traversed in increasing order of edge times and therefore respects time. For instance, suppose each edge represents a contact (\emph{e.g.}, email, phone call, proximity) between two entities; then a temporal random walk represents a feasible route for a piece of information through the dynamic network. It is straightforward to see that existing methods that ignore time learn embeddings from a set of random walks that are not actually possible when time is respected and thus represent invalid sequences of events. There is only a small overlap between $\mathbb{S}_T$ and $\mathbb{S}_D$ as shown in Figure~\ref{fig:space-of-random-walks}, since only a small fraction of the space of walks in $\mathbb{S}_D$ are actually time-respecting (valid temporal walks). \makeatletter \global\let\tikz@ensure@dollar@catcode=\relax \makeatother \tikzstyle{every node}=[font=\large,line width=1.5pt] \begin{figure}[b!] \vspace{-5mm} \centering \begin{center} \subfigure[Static graph ignoring time]{ \label{fig:static-graph-example} \scalebox{0.46}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick,main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\bfseries}] \node[main node] (3) {$\mathbf{v_2}$}; \node[main node] (1) [below left of=3] {$\mathbf{v_1}$}; \node[main node] (4) [below right of=1] {$\mathbf{v_4}$}; \node[main node] (2) [below right of=3] {$\mathbf{v_3}$}; \node[main node] (5) [right of=2] {$\mathbf{v_5}$}; \node[main node] (6) [below right of=2] {$\mathbf{v_6}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (1) edge [left] node [above left] {} (3) (2) edge [left] node[below right] {} (4) (4) edge [left] node[below left] {} (1) (3) edge[bend left] node[sloped,anchor=center,above] {} (5) (5) edge node[anchor=center,above] {} (2) (6) edge node[sloped,anchor=center,below] {} (2) (3) edge [right] node[above right] {} (2); \end{tikzpicture} } } \tikzstyle{background-page}=[rectangle, fill=gray!25, inner sep=0.5cm, rounded corners=5mm] \subfigure[Discrete-Time Dynamic Network (DTDN)] {\label{fig:DTND-example} \begin{minipage}[t]{0.43\linewidth} \scalebox{0.42}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick,main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\LARGE\bfseries}] \node[main node] (3) {$\mathbf{v_2}$}; \node[main node] (1) [below left of=3] {$\mathbf{v_1}$}; \node[main node] (4) [below right of=1] {$\mathbf{v_4}$}; \node[main node] (2) [below right of=3] {$\mathbf{v_3}$}; \node[main node] (5) [right of=2] {$\mathbf{v_5}$}; \node[main node] (6) [below right of=2] {$\mathbf{v_6}$};
\tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (1) edge [left] node [above left] {} (3) (2) edge [right] node[below right] {} (4) (4) edge [left] node[below left] {} (1) (3) edge [right] node[above right] {} (2); \begin{pgfonlayer}{background} \node [background-page, fit=(3) (1) (4) (2) (5) (6), label=below:\fontsize{18}{20}\selectfont $G_1$ ] {}; \end{pgfonlayer} \end{tikzpicture} } \end{minipage} \hspace{2mm} \begin{minipage}[t]{0.43\linewidth} \scalebox{0.42}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick,main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\LARGE\bfseries}] \node[main node] (3) {$\mathbf{v_2}$}; \node[main node] (1) [below left of=3] {$\mathbf{v_1}$}; \node[main node] (4) [below right of=1] {$\mathbf{v_4}$}; \node[main node] (2) [below right of=3] {$\mathbf{v_3}$}; \node[main node] (5) [right of=2] {$\mathbf{v_5}$}; \node[main node] (6) [below right of=2] {$\mathbf{v_6}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (3) edge[bend left] node[sloped,anchor=center,above] {} (5) (5) edge node[anchor=center,above] {} (2) (6) edge node[sloped,anchor=center,below] {} (2); \begin{pgfonlayer}{background} \node [background-page, fit=(3) (1) (4) (2) (5) (6), label=below:\fontsize{18}{20}\selectfont $G_2$] {}; \end{pgfonlayer} \end{tikzpicture} } \end{minipage} } \end{center} \vspace{-4mm} \caption{Representing the continuous-time dynamic network as a static graph or discrete-time dynamic network (DTDN). Noise and information loss occur when the true dynamic network (Figure~\ref{fig:info-loss-example}) is approximated as a sequence of discrete static snapshot graphs $G_1,\ldots,G_t$ using a user-defined aggregation time-scale $s$ (temporal granularity). Suppose the dynamic network in Figure~\ref{fig:info-loss-example} is used and $s=5$; then $G_1$ includes all edges in the time-interval $[1,5]$ whereas $G_2$ includes all edges in $[6,10]$, and so on. Notice that in the static snapshot graph $G_1$ the walk $v_4 \!\! \longrightarrow \! v_1 \!\! \longrightarrow \! v_2$ is still possible despite it being \emph{invalid}, while the perfectly valid temporal walk $v_1 \!\! \longrightarrow \! v_2 \!\! \longrightarrow \! v_5$ is impossible. Both cases are captured correctly without any loss using the notion of a \emph{temporal walk} on the actual dynamic network. } \label{fig:info-discrete-time-model-loss-example} \end{figure} The order in which links (events) occur in a network carries important information; \emph{e.g.}, if the event (link) represents an email communication sent from one user to another, then the state of the user who receives the email message changes in response to the email communication. For instance, suppose we have two emails $e_i = (v_1,v_2)$ from $v_1$ to $v_2$ and $e_j=(v_2,v_3)$ from $v_2$ to $v_3$, and let $\mathcal{T}(v_1,v_2)$ be the time of an email $e_i = (v_1,v_2)$. If $\mathcal{T}(v_1,v_2) < \mathcal{T}(v_2,v_3)$, then the message $e_j = (v_2,v_3)$ may reflect the information received from the email communication $e_i=(v_1,v_2)$. However, if $\mathcal{T}(v_1,v_2) > \mathcal{T}(v_2,v_3)$, then the message $e_j = (v_2,v_3)$ cannot contain any information communicated in the email $e_i=(v_1,v_2)$. This is just one simple example illustrating the importance of modeling the actual sequence of events (email communications).
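\medskip\noindent To make the temporal-ordering constraint concrete, the following minimal Python sketch (illustrative only; the function name and the edge representation are ours, not from the released implementation) checks whether a sequence of timestamped edges forms a temporally valid walk:
{\footnotesize
\begin{verbatim}
def is_temporally_valid(walk):
    """walk: list of (u, v, t) edges. A valid temporal walk must
    chain on the intermediate node and have non-decreasing times."""
    for (u1, v1, t1), (u2, v2, t2) in zip(walk, walk[1:]):
        if v1 != u2:   # consecutive edges must share a node
            return False
        if t2 < t1:    # time must be non-decreasing
            return False
    return True

# Email example: e_i = (v1, v2), e_j = (v2, v3)
print(is_temporally_valid([("v1", "v2", 1), ("v2", "v3", 2)]))  # True
print(is_temporally_valid([("v1", "v2", 2), ("v2", "v3", 1)]))  # False
\end{verbatim}}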
Embedding methods that ignore time are prone to many issues, such as learning inappropriate node embeddings that do not accurately capture the dynamics in the network, \emph{e.g.}, the real-world interactions or flow of information among nodes. Examples of the information loss that occurs when time is ignored or when the actual dynamic network is approximated using a sequence of discrete static snapshot graphs are shown in Figures~\ref{fig:info-loss-example} and~\ref{fig:info-discrete-time-model-loss-example}, respectively. This is true for networks that involve the flow or diffusion of information through a network~\cite{lerman2010information,acemoglu2010spread,rossi2012dpr-dynamical}, networks modeling the spread of disease/infection~\cite{infect}, spread of influence in social networks (with applications to product adoption, viral marketing)~\cite{java2006modeling,domingos2005mining}, or more generally any type of dynamical system or diffusion process over a network~\cite{lerman2010information,acemoglu2010spread,rossi2012dpr-dynamical}. The proposed approach naturally supports generating dynamic node embeddings for any pair of nodes at a specific time $t$. More specifically, given a newly arrived edge between nodes $i$ and $j$ at time $t$, we simply add the edge to the graph, perform a number of temporal random walks that contain those nodes, and then update the embedding vectors for those nodes (via a fast partial update) using only those walks. In this case, there is no need to recompute the embedding vectors for all nodes in the graph, as the update is minor and an online partial update can be performed quickly. This includes the case where either node in the new edge has never been seen previously. The above is a special case of our framework and requires only a trivial modification. Notice that we can also drop past edges as they become stale. \medskip\noindent\textbf{Summary of Main Contributions:} This work makes three main contributions. First, we describe a new class of embeddings based on the notion of \emph{temporal walks}. This notion can be used in a straightforward fashion to adapt other existing and/or future state-of-the-art methods for learning embeddings from temporal networks (graph streams). Second, unlike previous work that learns embeddings using an approximation of the actual dynamic network (\emph{i.e.}, a sequence of static graphs), we describe a new class of embeddings called \emph{continuous-time dynamic network embeddings} (CTDNE) that are learned directly from the graph stream. CTDNEs avoid the issues and information loss that arise when time is ignored or the dynamic network (graph stream) is approximated as a sequence of discrete static snapshot graphs. This new class of embeddings leverages the notion of \emph{temporal walks} that captures the \emph{temporally valid interactions} (\emph{e.g.}, flow of information, spread of diseases) in the dynamic network (graph stream) in a lossless fashion. As an aside, since these embeddings are learned directly from the graph stream at the finest granularity, they can also be learned in an online fashion, \emph{i.e.}, node embeddings are updated after every new edge (or batch of edges). Finally, we describe a framework for learning them based on the notion of \emph{temporal walks}. The proposed framework provides a basis for generalizing existing (or future state-of-the-art) embedding methods that use the traditional notion of random walks over static graphs or over a discrete approximation of the actual dynamic network.
\newcommand{\subsec}[1]{\medskip\noindent\textbf{#1:}\;} \section{Related work} \label{sec:related-work} \noindent \subsec{Representation Learning in Static Networks} The node embedding problem has received considerable attention from the research community in recent years.\footnote{In the time between our shorter CTDNE paper from early 2018~\cite{CTDNE} and this paper's original submission, there have been a number of closely related follow-up works. For temporal clarity, these works are not reviewed or compared against in detail.} See~\cite{rossi12jair} for an early survey on representation learning in relational/graph data. The goal is to learn encodings (embeddings, representations, features) that capture key properties about each node, such as their role in the graph based on their structural characteristics (\emph{i.e.}, roles capture distinct structural properties, \emph{e.g.}, hub nodes, bridge nodes, near-cliques)~\cite{rossi2014roles} or community (\emph{i.e.}, communities represent groups of nodes that are close together in the graph based on proximity, cohesive/tightly connected nodes)~\cite{ng2002spectral,pons2006computing}. Since nodes that share similar roles (based on structural properties) or communities (based on proximity, cohesiveness) are grouped close to each other in the embedding space, one can easily use the learned embeddings for tasks such as ranking~\cite{page1998pagerank}, community detection~\cite{ng2002spectral,pons2006computing}, role embeddings~\cite{rossi2014roles,ahmed2017edgeroles}, link prediction~\cite{liu2010link}, and node classification~\cite{rossi2012dynamic-srl}. Many of the techniques initially proposed for solving the node embedding problem were based on graph factorization~\cite{ahmedWWW13,Belkin02laplacianeigenmaps,grarep}. More recently, the skip-gram model~\cite{skipgram-old} was introduced in the natural language processing domain to learn vector representations for words. Inspired by skip-gram's success in language modeling, various methods~\cite{deepwalk,node2vec,line} have been proposed to learn node embeddings using skip-gram by treating a graph as a ``document.'' Two of the more notable methods, DeepWalk~\cite{deepwalk} and node2vec~\cite{node2vec}, use random walks to sample an ordered sequence of nodes from a graph. The skip-gram model can then be applied to these sequences to learn node embeddings. \subsec{Representation Learning in Dynamic Networks} Researchers have also tackled the problem of node embedding in more complex graphs, including attributed networks~\cite{ASNE}, heterogeneous networks~\cite{dong2017metapath2vec}, and dynamic networks~\cite{rossi2013dbmm-wsdm,zhou2018dynamic,li2017attributed}. However, the majority of work in the area still fails to consider graphs that evolve over time (\emph{i.e.}, temporal graphs). A few works have begun to explore the problem of learning node embeddings from temporal networks~\cite{rossi2013dbmm-wsdm,hisano2016semi,kamra2017dgdmn, zhu2016scalable,saha2018models,rahman2018dylink2vec}. All of these approaches \emph{approximate} the dynamic network as a sequence of discrete static snapshot graphs, which is fundamentally different from the class of continuous-time dynamic network embedding methods introduced in this work.
Notably, this work is the first to propose \emph{temporal random walks} for embeddings as well as \emph{CTDN embeddings} that use temporal walks to capture the actual temporally valid sequences observed in the CTDN, thus avoiding the issues and information loss that arise when embedding methods simply ignore time or use discrete static snapshot graphs (see Figure~\ref{fig:info-discrete-time-model-loss-example} for one example). Furthermore, we introduce a unifying framework that can serve as a basis for generalizing other random walk-based deep learning (\emph{e.g.},~\cite{lee17-Deep-Graph-Attention}) and embedding methods (\emph{e.g.},~\cite{struc2vec,node2vec,ComE,ASNE,dong2017metapath2vec,hamilton2017inductive}) for learning more appropriate time-dependent embeddings from temporal networks. In contrast, previous work has simply introduced new approaches for temporal networks~\cite{hisano2016semi} and therefore focuses on an entirely different problem than the one in this work, which is a general framework that can be leveraged by other non-temporal approaches. Temporal graph smoothing of a sequence of discrete static snapshot graphs was proposed for classification in dynamic networks~\cite{rossi2012dynamic-srl}. The same approach has also been used for deriving role-based embeddings from dynamic networks~\cite{rossi2012role-www,rossi2013dbmm-wsdm}. More recently, these techniques have been used to derive more meaningful embeddings from a sequence of discrete static snapshot graphs~\cite{bonner2018temporal,singer2019node,saha2018models,rahman2018dylink2vec}. All of these approaches model the dynamic network as a sequence of discrete static snapshot graphs, which is fundamentally different from the class of continuous-time dynamic network embedding methods introduced in this work. Table~\ref{table:qual-comp} provides a qualitative comparison of CTDNE methods to existing static methods or DTDNE methods that approximate the dynamic network as a discrete sequence of static snapshot graphs. \begin{table}[t!] \centering \renewcommand{\arraystretch}{1.10} \caption{Comparison of Different Classes of Embedding Methods} \label{table:qual-comp} \vspace{-2.5mm} \footnotesize \setlength{\tabcolsep}{2.9pt} \begin{tabularx}{1.0\linewidth}{l@{} cc ccc cHH @{}} \multicolumn{8}{@{}p{1.0\linewidth}}{ \scriptsize Comparison of CTDNE methods to existing methods categorized as either static methods (that ignore time) or DTDNE methods that approximate the actual dynamic network using a sequence of discrete static snapshot graphs. Does the method: use the actual dynamic network at the finest temporal granularity, \emph{e.g.}, seconds or ms (or does it use discrete static approximations of the dynamic network); produce temporally valid embeddings; use temporal bias/smoothing functions to give more importance to recent or temporally recurring information; and naturally support graph streams and the streaming/online setting in general, where data arrives continuously over time and embeddings can be incrementally updated in an online fashion.
}\\ \toprule & {\footnotesize \bf Temporally } & & {\footnotesize \bf Finest} & {\footnotesize \bf Temporal} &&& \\ & {\footnotesize \bf valid} & & {\footnotesize \bf granularity} & {\footnotesize \bf bias/smoothing} && {\footnotesize \bf Streaming} && \\ \midrule \textsf{Static} & \ding{55} & & \ding{55} & \ding{55} && \ding{55} & \\ \textsf{DTDNE} & \ding{55} & & \ding{55} & \ding{51} && \ding{55} & \\ \textsf{CTDNE} & \ding{51} & & \ding{51} & \ding{51} && \ding{51} & \\ \bottomrule \end{tabularx} \vspace{-2mm} \end{table} \subsec{Temporal Networks} More recently, there has been significant research in developing network analysis and machine learning methods for modeling temporal networks. This includes node classification in temporal networks~\cite{rossi2012dynamic-srl}, temporal link prediction~\cite{dunlavy2011temporal}, dynamic community detection~\cite{cazabet2014dynamic}, dynamic mixed-membership role models~\cite{fu2009dynamic,rossi2012role-www,rossi2013dbmm-wsdm}, anomaly detection in dynamic networks~\cite{ranshous2015dynamic-net-anomaly-survey}, influence modeling and online advertisement~\cite{goyal2010learning}, finding important entities in dynamic networks~\cite{rossi2012dpr-dynamical,OMadadhain2005}, and temporal network centrality and measures~\cite{holme2012temporal,beres2018temporal}. \subsec{Random Walks} Random walks on graphs have been studied for decades~\cite{lovasz1993random}. The theory underlying random walks and their connection to eigenvalues and other fundamental properties of graphs are well-understood~\cite{chung2007random}. Our work is also related to uniform and non-uniform random walks on graphs~\cite{lovasz1993random,chung2007random}. Random walks are at the heart of many important applications such as ranking~\cite{page1998pagerank}, community detection~\cite{ng2002spectral,pons2006computing}, recommendation~\cite{bogers2010movie}, link prediction~\cite{liu2010link}, influence modeling~\cite{java2006modeling}, search engines~\cite{lassez:latentlinks}, image segmentation~\cite{grady2006random}, routing in wireless sensor networks~\cite{servetto2002constrained}, and time-series forecasting~\cite{rossi2012dpr-dynamical}. These applications and techniques may also benefit from the proposed class of embeddings that are based on the notion of \emph{temporal random walks}. Recently, Ahmed~\emph{et al.}\xspace~\cite{ahmed17attrRandomWalks} proposed the notion of \emph{attributed random walks} that can be used to generalize existing methods for inductive learning and/or graph-based transfer learning tasks. In future work, we will investigate combining attributed random walks and temporal random walks~\cite{tremblay2001temporal} to derive even more powerful embeddings. \section{Continuous-Time Dynamic Embeddings} \label{sec:streaming-network-embeddings} \noindent While previous work uses discrete approximations of the dynamic network (\emph{i.e.}, a sequence of discrete static snapshot graphs), this paper proposes an entirely new direction that instead focuses on learning embeddings directly from the graph stream using only temporally valid information.
In this work, instead of approximating the dynamic network as a sequence of discrete static snapshot graphs defined as $G_1, \ldots, G_T$ where $G_i=(V, E_i)$ and $E_i$ is the set of edges active during the timespan $[t_{i-1},t_i]$, we model the \emph{temporal interactions} in a lossless fashion as a \emph{continuous-time dynamic network} (CTDN) defined formally as: \begin{Definition}[\sc Continuous-Time Dynamic Network] \label{eq:cont-time-dynamic-network} Given a graph $G=(V,E_T,\mathcal{T})$, let $V$ be a set of vertices, let $E_T \subseteq V \times V \times \RR^{+}$ be a set of temporal edges between vertices in $V$, and let $\mathcal{T} : E_T \rightarrow \RR^{+}$ be a function that maps each edge to a corresponding timestamp. At the finest granularity, each edge $e_i = (u,v,t) \in E_T$ may be assigned a unique time $t \in \RR^{+}$. \end{Definition}\noindent In continuous-time dynamic networks (\emph{i.e.}, temporal networks, graph streams)~\cite{holme2012temporal}, edges occur over a time span $\mathcal{T} \subseteq \mathbb{T}$ where $\mathbb{T}$ is the temporal domain.\footnote{The terms temporal network, graph stream, and continuous-time dynamic network are used interchangeably.} For continuous-time systems $\mathbb{T}=\RR^{+}$. In such networks, a \emph{valid} walk is defined as a sequence of nodes connected by edges with non-decreasing timestamps~\cite{nicosia2013graph}. In other words, if each edge captures the time of contact between two entities, then a (valid temporal) walk may represent a feasible route for a piece of information. More formally, \begin{Definition}[\sc Temporal Walk]\label{def:temporal-walk} A temporal walk from $v_1$ to $v_k$ in $G$ is a sequence of vertices $\langle v_1, v_2, \cdots, v_k \rangle$ such that $\langle v_i, v_{i+1} \rangle \in E_T$ for $1 \leq i < k$, and $\mathcal{T}(v_i, v_{i+1}) \leq \mathcal{T}(v_{i+1}, v_{i+2})$ for $1 \leq i < (k-1)$. For two arbitrary vertices $u$, $v \in V$, we say that $u$ is \textit{temporally connected} to $v$ if there exists a temporal walk from $u$ to $v$. \end{Definition} \noindent The definition of a temporal walk echoes the standard definition of a walk in static graphs but with an additional constraint that requires the walk to respect time, that is, edges must be traversed in increasing order of edge times. As such, temporal walks are naturally asymmetric~\cite{xuan2003computing,ferreira2007evaluation,tremblay2001temporal}. Modeling the dynamic network in a continuous fashion makes it completely trivial to add or remove edges and nodes. For instance, suppose we have a new edge $(v,u,t)$ at time $t$; then we can sample a small number of temporal walks ending in $(v,u)$ and perform a fast partial update to obtain the updated embeddings (see Section~\ref{sec:time-preserving-embeddings} for more details). This is another advantage of our approach compared to previous work that uses discrete static snapshot graphs to approximate the dynamic network. Note that performing a temporal walk forward through time is equivalent to one backward through time. However, for the streaming case (online learning of the embeddings) where we receive an edge $(v,u,t)$ at time $t$, we sample a temporal walk backward through time. A \emph{temporally invalid walk} is a walk that does not respect time. Any method that uses temporally invalid walks or approximates the dynamic network as a sequence of static snapshot graphs is said to have \emph{temporal loss}. \begin{figure}[h!]
\vspace{0mm} \centering \begin{center} \scalebox{0.85}{ \begin{tikzpicture} \begin{scope}[blend group = soft light] \fill[gray!70] ( 90:1.5) circle (2); \fill[gray!50] (110:1.8) circle (0.8); \fill[gray!70] (75:1.5) circle (0.4); \end{scope} \node [font=\Large] {\fontsize{18}{20}\selectfont $\mathbb{S}$}; \node at ( 110:1.8) {\fontsize{15}{17}\selectfont $\mathbb{S}_D$}; \node at ( 75:1.5) {\fontsize{15}{17}\selectfont $\mathbb{S}_T$}; \end{tikzpicture} } \end{center} \vspace{-3mm} \caption{ Space of all possible random walks $\mathbb{S}$ (up to a fixed length $L$) including (i) the space of temporal (time-obeying) random walks denoted as $\mathbb{S}_T$ that capture the temporally valid flow of information (or disease, etc.) in the network without any loss and (ii) the space of random walks that are possible when the dynamic network is approximated as a sequence of discrete static snapshot graphs denoted as $\mathbb{S}_{D}$. Notably, there is a very small overlap between $\mathbb{S}_T$ and $\mathbb{S}_D$ since only a small fraction of the walks in $\mathbb{S}_D$ are actually time-respecting (valid temporal walks). } \label{fig:space-of-random-walks} \vspace{0mm} \end{figure} We define a new type of embedding for dynamic networks (graph streams) called continuous-time dynamic network embeddings (CTDNEs). \begin{Definition}[\sc Continuous-Time Dynamic Network Embedding]\label{def:ctdne-problem} Given a dynamic network $G=(V,E_T,\mathcal{T})$ (graph stream), the goal is to learn a function $f : V \rightarrow \RR^{D}$ that maps nodes in the continuous-time dynamic network (graph stream) $G$ to $D$-dimensional time-dependent embeddings using only data that is temporally valid (\emph{e.g.}, temporal walks defined in Definition~\ref{def:temporal-walk}). \end{Definition}\noindent Unlike previous work that ignores time or \emph{approximates} the dynamic network as a sequence of discrete static snapshot graphs $G_1, \ldots, G_t$, the CTDNEs proposed in this work are learned from temporal random walks that capture the true temporal interactions (\emph{e.g.}, flow of information, spread of diseases, etc.) in the dynamic network in a lossless fashion. CTDNEs (or simply dynamic node embeddings) can be learned incrementally or in a streaming fashion where embeddings are updated in real-time as new edges arrive. For this new class of dynamic node embeddings, we describe a general framework for learning such temporally valid embeddings from the graph stream in Section~\ref{sec:framework}. \section{Framework} \label{sec:framework} \noindent While Section~\ref{sec:streaming-network-embeddings} formally introduced the new class of embeddings investigated in this work, this section describes a general framework for deriving them based on the notion of \emph{temporal walks}. The framework has two main interchangeable components that can be used to \emph{temporally bias} the learning of the dynamic node embeddings. We describe each component in Sections~\ref{sec:selection-of-start-time} and~\ref{sec:temporal-random-walk}. In particular, the CTDNE framework generates \emph{(un)biased temporal random walks} from CTDNs that are then used in Section~\ref{sec:time-preserving-embeddings} for deriving time-dependent embeddings that are learned from temporally valid node sequences, capturing in a lossless fashion the actual flow of information or spread of disease in a network. It is straightforward to use the CTDNE framework for temporal networks where edges are active only for a specified time period.
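\medskip\noindent Before detailing the two components, we sketch in Python one possible in-memory representation of a CTDN together with the temporal neighborhood lookup used by the temporal walk component in Section~\ref{sec:temporal-random-walk}. This is a minimal illustrative sketch (the class and method names are ours, not from the released implementation); edges are stored in both directions so that walks can traverse undirected contacts:
{\footnotesize
\begin{verbatim}
from collections import defaultdict

class CTDN:
    """Continuous-time dynamic network stored as a timestamped
    adjacency list: adj[v] = [(neighbor, time), ...]."""
    def __init__(self):
        self.adj = defaultdict(list)
        self.edges = []                  # temporal edge stream (u, v, t)

    def add_edge(self, u, v, t):
        self.edges.append((u, v, t))
        self.adj[u].append((v, t))       # stored in both directions so
        self.adj[v].append((u, t))       # undirected contacts can be walked

    def temporal_neighbors(self, v, t):
        """Gamma_t(v): (w, t') pairs with an edge at time t' > t;
        the same neighbor may appear multiple times."""
        return [(w, tp) for (w, tp) in self.adj[v] if tp > t]
\end{verbatim}}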
\begin{figure}[h!] \vspace{-2mm} \centering \hspace{-4mm} \includegraphics[width=0.6\linewidth]{fig2-eps-converted-to.pdf} \caption{ Example initial edge selection cumulative probability distributions (CPDs) for each of the variants investigated (uniform, linear, and exponential). Observe that exponential biases the selection of the initial edge towards those occurring more recently than in the past, whereas linear lies between exponential and uniform. } \label{fig-initial-edge-selection-fb-forum} \end{figure} \subsection{Initial Temporal Edge Selection} \label{sec:selection-of-start-time} This section describes approaches to temporally bias the temporal random walks by selecting the initial temporal edge from which each temporal random walk begins. In general, each temporal walk starts from a temporal edge $e_i \in E_T$ at time $t=\mathcal{T}(e_i)$ selected from a distribution $\mathbb{F}_s$. The distribution used to select the initial temporal edge can either be uniform, in which case there is no bias, or the selection can be temporally biased using an arbitrary weighted (non-uniform) distribution for $\mathbb{F}_s$. For instance, to learn node embeddings for the temporal link prediction task, we may want to begin more temporal walks from edges closer to the current time point, as the events/relationships in the distant past may be less predictive or indicative of the current state of the system. Selecting the initial temporal edge in an unbiased fashion is discussed in Section~\ref{sec:selection-of-start-time-unbiased}, whereas strategies that temporally bias the selection of the initial edge are discussed in Section~\ref{sec:selection-of-start-time-biased}. In the case of learning CTDNEs in an online fashion, we do not need to select the initial edge since we simply sample a number of temporal walks that end at the new edge. See Section~\ref{sec:time-preserving-embeddings} for more details on learning CTDNEs in an online fashion. \subsubsection{Unbiased} \label{sec:selection-of-start-time-unbiased} In the case of unbiased initial edge selection, each edge $e_i=(v,u,t) \in E_T$ has the same probability of being selected: \begin{equation}\label{eq:uniform-edge} \Pr(e) = 1 / |E_T| \end{equation}\noindent This corresponds to selecting the initial temporal edge using a uniform distribution. \subsubsection{Biased} \label{sec:selection-of-start-time-biased} We describe two techniques to temporally bias the selection of the initial edge that determines the start of the temporal random walk. In particular, we select the initial temporal edge using a temporally weighted distribution based on exponential and linear functions. However, the proposed continuous-time dynamic network embedding framework is flexible with many interchangeable components and therefore can easily support other temporally weighted distributions for selecting the initial temporal edge. \medskip\noindent\textbf{Exponential:} We can bias initial edge selection using an exponential distribution, in which case each edge $e \in E_T$ is assigned the probability: \begin{equation}\label{eq:exponential-dist} \Pr(e) = \frac{\exp\big[ \mathcal{T}(e)-t_{\min}\big]}{\sum_{e^\prime \in E_T} \, \exp\big[ \mathcal{T}(e^\prime)-t_{\min}\big]} \end{equation}\noindent where $t_{\min}$ is the minimum time associated with an edge in the dynamic graph. This defines a distribution that heavily favors edges appearing later in time.
\medskip\noindent\textbf{Linear:} When the time difference between two time-wise consecutive edges is large, it can sometimes be helpful to map the edges to discrete time steps. Let $\eta : E_T \rightarrow \mathbb{Z}^{+}$ be a function that sorts the edges in the graph in ascending order by time. In other words, $\eta$ maps each edge to an index with $\eta(e) = 1$ for the earliest edge $e$. In this case, each edge $e \in E_T$ will be assigned the probability: \begin{equation}\label{eq:linear-dist} \Pr(e) = \frac{\eta(e)}{\sum_{e^\prime \in E_T} \eta(e^\prime)} \end{equation}\noindent See Figure~\ref{fig-initial-edge-selection-fb-forum} for examples of the uniform, linear, and exponential variants. \subsection{Temporal Random Walks} \label{sec:temporal-random-walk} \noindent After selecting the initial edge $e_i = (u, v, t)$ at time $t$ to begin the temporal random walk (Section~\ref{sec:selection-of-start-time}) using $\mathbb{F}_s$, how can we perform a temporal random walk starting from that edge? We define the set of temporal neighbors of a node $v$ at time $t$ as follows: \begin{Definition}[\sc Temporal Neighborhood]\label{def:temporal-neighbor} The set of temporal neighbors of a node $v$ at time $t$, denoted $\Gamma_t(v)$, is: \begin{equation}\label{eq:potential-neighbors-at-time-t} \Gamma_t(v) = \big\{(w, t^\prime) \,\, | \,\, e=(v,w, t^\prime) \in E_T \, \wedge \mathcal{T}(e) > t \big\} \end{equation} \end{Definition} \noindent Observe that the same neighbor $w$ can appear multiple times in $\Gamma_t(v)$ since multiple temporal edges can exist between the same pair of nodes. See Figure~\ref{fig:temporal-neighbors} for an example. The next node in a temporal random walk can then be chosen from the set $\Gamma_t(v)$. Here we use a second distribution $\mathbb{F}_\Gamma$ to \emph{temporally bias} the neighbor selection. Again, this distribution can either be uniform, in which case no bias is applied, or temporally biased to consider time. For instance, we may want to bias the sampling strategy towards walks that exhibit smaller ``in-between'' time for consecutive edges. That is, for each consecutive pair of edges $(u, v, t)$ and $(v, w, t+k)$ in the random walk, we want $k$ to be small. For temporal link prediction on a dynamic social network, restricting the ``in-between'' time allows us to sample walks that do not group friends from different time periods together. As an example, if $k$ is small we are likely to sample the random walk sequence $(v_1, v_2, t), (v_2, v_3, t+k)$, which makes sense as $v_1$ and $v_3$ are more likely to know each other since $v_2$ has interacted with them both recently. On the other hand, if $k$ is large we are unlikely to sample the sequence. This helps to separate people that $v_2$ interacted with during very different time periods (\textit{e.g.}, high school and graduate school) as they are less likely to know each other. \makeatletter \global\let\tikz@ensure@dollar@catcode=\relax \makeatother \tikzstyle{every node}=[font=\large,line width=1.5pt] \begin{figure}[h!]
\centering \begin{center} \subfigure[Neighborhood $\Gamma(v_2)$] {\label{fig:neighborhood-example} \scalebox{0.55}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick, main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\bfseries}, inactive node/.style={circle,draw=gray!150,fill=white,draw,font=\sffamily\Large\bfseries,text=gray!150}] \node[main node] (2) {$\mathbf{v_3}$}; \node[main node] (1) [below left of=2] {$\mathbf{v_2}$}; \node[main node] (3) [left of=1] {$\mathbf{v_1}$}; \node[main node] (4) [below right of=1] {$\mathbf{v_5}$}; \node[main node] (5) [right of=1] {$\mathbf{v_4}$}; \node[main node] (6) [above left of=1] {$\mathbf{v_8}$}; \node[main node] (8) [below left of=1] {$\mathbf{v_6}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (3) edge [thick,line width=0.6mm,left] node [above left] {\textbf{t=6}} (1) (1) edge [right] node[above right] {} (6) (1) edge [right] node[above left] {} (8) (1) edge [right] node[above right] {} (5) (1) edge [left] node[below left] {} (4) (1) edge [right] node[above left] {} (2); \end{tikzpicture} } } \hspace{4mm} \subfigure[Temporal neigh. $\Gamma_{t}(v_2)$] {\label{fig:temporal-neighborhood-example} \scalebox{0.55}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick, main node/.style={circle,draw=thelightblue,fill=white,draw,font=\sffamily\Large\bfseries}, inactive node/.style={circle,draw=gray!150,fill=white,draw,font=\sffamily\Large\bfseries,text=gray!150}] \node[main node] (2) {$\mathbf{v_3}$}; \node[main node] (1) [below left of=2] {$\mathbf{v_2}$}; \node[main node] (3) [left of=1] {$\mathbf{v_1}$}; \node[main node] (4) [below right of=1] {$\mathbf{v_5}$}; \node[main node] (5) [right of=1] {$\mathbf{v_4}$}; \node[inactive node] (6) [above left of=1] {$\mathbf{v_8}$}; \node[inactive node] (8) [below left of=1] {$\mathbf{v_6}$}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\sffamily}] (3) edge [thick,line width=0.6mm,left] node [above left] {\textbf{t=6}} (1) (1) edge [draw=gray!150,text=black, dashed,right] node[above right] {4} (6) (1) edge [draw=gray!150,text=black, dashed,right] node[above left] {1} (8) (1) edge [right] node[above right] {7} (5) (1) edge [left] node[below left] {9} (4) (1) edge [right] node[above left] {8,10} (2); \end{tikzpicture} } } \end{center} \vspace{-4mm} \caption{ Temporal neighborhood of a node $v_2$ at time $t=6$ denoted as $\Gamma_t(v_2)$. Notice that $\Gamma_t(v_2) = \{v_4, v_3, v_5, v_3\}$ is an ordered multiset where the temporal neighbors are sorted in ascending order by time, with the most recent nodes appearing last. Moreover, the same node can appear multiple times (\emph{e.g.}, a user sends another user multiple emails, or an association/event occurs multiple times between the same entities). This is in contrast to the definition of neighborhood used by previous work, which is not parameterized by time, \emph{e.g.}, $\Gamma(v_2) = \{v_3, v_4, v_5, v_6, v_8\}$ or $\Gamma(v_2) = \{v_3, v_3, v_4, v_5, v_6, v_8\}$ if multigraphs are supported.
} \label{fig:temporal-neighbors} \vspace{-2mm} \end{figure} \subsubsection{Unbiased} \label{sec:temporal-random-walk-unbiased} For unbiased temporal neighbor selection, given an arbitrary edge $e = (u, v, t)$, each temporal neighbor $w \in \Gamma_t(v)$ of node $v$ at time $t$ has the following probability of being selected: \begin{equation}\label{eq:uniform-neighbor} \Pr(w) = 1 / |\Gamma_t(v)| \end{equation}\noindent \subsubsection{Biased} \label{sec:temporal-random-walk-biased} We describe two techniques to bias the temporal random walks by sampling the next node in a temporal walk via temporally weighted distributions based on exponential and linear functions. However, the continuous-time dynamic network embedding framework is flexible and can easily be used with other application or domain-dependent \emph{temporal bias functions}. \medskip\noindent\textbf{Exponential:} When exponential decay is used, we formulate the probability as follows. Given an arbitrary edge $e = (u, v, t)$, each temporal neighbor $w \in \Gamma_t(v)$ has the following probability of being selected: \begin{equation}\label{eq:exponential-penalty} \Pr(w) = \frac{\exp\!\big[ \tau(w) - \tau(v)\big]}{\sum_{w^\prime \in \Gamma_t(v)} \exp\!\big[ \tau(w^\prime) - \tau(v) \big]} \end{equation}\noindent Note that we abuse the notation slightly here and use $\tau$ to mean the mapping to the corresponding time. This is similar to the exponentially decaying probability of consecutive contacts observed in the spread of computer viruses and worms~\cite{holme2012temporal}. \medskip\noindent\textbf{Linear:} Here, we define $\delta : V \times \RR^{+} \rightarrow \mathbb{Z}^{+}$ as a function which sorts temporal neighbors in descending order time-wise. The probability of each temporal neighbor $w \in \Gamma_t(v)$ of node $v$ at time $t$ is then defined as: \begin{equation}\label{eq:linear-penalty} \Pr(w) = \frac{\delta(w)}{\sum_{w^\prime \in \Gamma_t(v)} \delta(w^\prime)} \end{equation}\noindent This distribution biases the selection towards edges that are closer in time to the current node. \subsubsection{Temporal Context Windows} Since temporal walks preserve time, it is possible for a walk to run out of \emph{temporally valid} edges to traverse. Therefore, we do not impose a strict length on the temporal random walks. Instead, we simply require each temporal walk to have a minimum length $\omega$ (in this work, $\omega$ is equivalent to the context window size for skip-gram \cite{skipgram-old}). A maximum length $L$ can be provided to accommodate longer walks. A temporal walk $\mathcal{S}_{t_i}$ with length $|\mathcal{S}_{t_i}|$ is considered valid iff \[ \omega \leq |\mathcal{S}_{t_i}| \leq L \] Given a set of temporal random walks $\{ \mathcal{S}_{t_1}, \mathcal{S}_{t_2}, \cdots, \mathcal{S}_{t_k}\}$, we define the temporal context window count $\beta$ as the total number of context windows of size $\omega$ that can be derived from the set of temporal random walks. Formally, this can be written as: \begin{equation} \label{eq:stopping-criterion} \beta \, = \sum_{i=1}^{k} \big( |\mathcal{S}_{t_i}| - \omega + 1\big) \end{equation} \noindent When deriving a set of temporal walks, we typically set $\beta$ to be a multiple of $N = |V|$. Note that this is only an implementation detail and is not important for Online CTDNEs. \begin{figure*}[t!] 
\centering \includegraphics[width=0.28\linewidth]{fig3.pdf} \hspace{4mm} \includegraphics[width=0.28\linewidth]{fig4.pdf} \hspace{4mm} \includegraphics[width=0.28\linewidth]{fig5.pdf} \vspace{-1mm} \caption{Frequency of \emph{temporal random walks} by length} \label{fig:temporal-walk-length-freq} \end{figure*} \subsection{Learning Dynamic Node Embeddings} \label{sec:time-preserving-embeddings} \noindent Given a temporal walk $\mathcal{S}_{t}$, we can now formulate the task of learning time-preserving node embeddings in a CTDN as the optimization problem: \begin{align} \label{eq:obj-func} \max_{f} \; \log \Pr \big(\,W_T = \{v_{i-\omega},\cdots,v_{i+\omega} \} \setminus v_i \;|\; f(v_i) \big) \end{align}\noindent where $f : V \rightarrow \RR^{D}$ is the node embedding function, $\omega$ is the context window size for optimization, and \[ W_T = \{v_{i-\omega},\cdots,v_{i+\omega} \} \]\noindent such that \[ \mathcal{T}(v_{i-\omega},v_{i-\omega+1}) < \cdots < \mathcal{T}(v_{i+\omega-1},v_{i+\omega}) \]\noindent is an arbitrary temporal context window $W_{T} \subseteq S_t$. For tractability, we assume conditional independence between the nodes of a temporal context window when observed with respect to the source node $v_i$. That is: \begin{align} \label{eq:conditional-indep} \Pr \big(\,W_T | f(v_i) \big) = \prod_{v_{i+k} \in W_T} \Pr \big(v_{i+k} | f(v_i) \big) \end{align} \noindent We can model the conditional likelihood of every source-nearby node pair $(v_i, v_j)$ as a softmax unit parameterized by a dot product of their feature vectors: \begin{align}\label{eq:cond-ll} \Pr \big(\,v_j | f(v_i) \big) = \frac{\exp\!\big[ f(v_j) \cdot f(v_i)\big]}{\sum_{v_k \in V} \exp\!\big[ f(v_k) \cdot f(v_i) \big]} \end{align}\noindent Using Eqs.~\ref{eq:conditional-indep}--\ref{eq:cond-ll}, the optimization problem in Eq.~\ref{eq:obj-func} reduces to: \begin{align}\label{eq:obj-func-simplifies} \max_{f} \; \sum_{v_i \in V} \Bigg( - \log Z_i + \sum_{v_{j} \in W_T} f(v_j) \cdot f(v_i) \Bigg) \end{align}\noindent where the term $Z_i = \sum_{v_j \in V} \exp\!\big[ f(v_i) \cdot f(v_j) \big]$ can be approximated by negative sampling. Given a graph $G$, let $\mathbb{S}$ be the space of all possible random walks on $G$ and let $\mathbb{S}_{T}$ be the space of all temporal random walks on $G$. It is straightforward to see that the space of temporal random walks $\mathbb{S}_{T}$ is contained within $\mathbb{S}$, and $\mathbb{S}_{T}$ represents only a tiny fraction of possible random walks in $\mathbb{S}$. Existing methods sample a set of random walks $\mathcal{S}$ from $\mathbb{S}$ whereas this work focuses on sampling a set of \emph{temporal random walks} $\mathcal{S}_t$ from $\mathbb{S}_{T} \subseteq \mathbb{S}$ (Fig.~\ref{fig:space-of-random-walks}). In general, the probability of an existing method sampling a temporal random walk from $\mathbb{S}$ by chance is extremely small and therefore the vast majority of random walks sampled by these methods represent sequences of events between nodes that are invalid (not possible) when time is respected. \smallskip \begin{Claim} Fix $L>0$, then $|\mathbb{S}| \gg |\mathbb{S}_D| \gg |\mathbb{S}_T|$. \end{Claim} \smallskip \noindent Therefore, previous methods that learn embeddings from random walks are unlikely to generate \emph{temporally valid sequences} of events/interactions between nodes that are actually possible when time is respected.
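\medskip\noindent As one concrete (but by no means required) instantiation of the optimization in Eq.~\ref{eq:obj-func-simplifies}, the sampled temporal walks can be fed to an off-the-shelf Skip-Gram implementation with negative sampling. The sketch below assumes the gensim library (version 4 or later) and treats each temporal walk as a ``sentence'' of node ids; it is an illustration, not necessarily the implementation used in our experiments:
{\footnotesize
\begin{verbatim}
from gensim.models import Word2Vec

def learn_ctdne_embeddings(temporal_walks, dimensions=128, omega=10):
    """temporal_walks: list of walks, each a time-ordered list of
    node ids. Optimizes Skip-Gram with negative sampling."""
    sentences = [[str(v) for v in walk] for walk in temporal_walks]
    model = Word2Vec(sentences=sentences,
                     vector_size=dimensions,  # embedding dimension D
                     window=omega,            # context window size
                     min_count=0, sg=1,       # Skip-Gram architecture
                     negative=5,              # approximates Z_i
                     workers=4)
    return model.wv                           # node id -> embedding
\end{verbatim}}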
{\algrenewcommand{\alglinenumber}[1]{\fontsize{6.5}{7}\selectfont#1 } \newcommand{\multiline}[1]{\State \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{#1\strut}} \begin{figure}[h!] \vspace{-2mm} \centering \begin{algorithm}[H] \caption{\,\small Continuous-Time Dynamic Network Embeddings } \label{alg:temporal-node2vec} { \begin{spacing}{1.15} \fontsize{7.5}{8.5}\selectfont \begin{algorithmic}[1] \vspace{-1.3mm} \Require a dynamic network (graph stream) $G = (V,\E_T,\mathcal{T})$, temporal context window count $\beta$, context window size $\omega$, embedding dimensions $D$ \smallskip \State Initialize number of temporal context windows $C = 0$ \While {$\beta - C > 0$ } \State Sample an edge $e_{t}\!=\!(v,u)$ via $\mathbb{F}_s$ (or use new edge at time $t$) \State $t \leftarrow \mathcal{T}(e_{t})$ \State $S_t = \textsc{TemporalWalk}(G, e_{t}, t, L, \omega + \beta - C - 1)$ \label{algline:obtain-temporal-walk} \If {$|S_t| \geq \omega$} \State Add the \emph{temporal walk} $S_t$ to $\mathcal{S}_T$ \label{algline:add-temporal-walk-to-set} \State $C \leftarrow C + (|S_t| - \omega + 1)$ \EndIf \EndWhile \State $\mZ = \textsc{StochasticGradientDescent}(\omega, D, \mathcal{S}_T)$ \label{algline:SGD-with-temporal-walks} \Comment{update embeddings} \State \textbf{return} \emph{dynamic} node embeddings $\mZ$ \label{algline:return-learned-representation-matrix} \end{algorithmic} \end{spacing}} \end{algorithm} \vspace{-2mm} \end{figure}} {\algrenewcommand{\alglinenumber}[1]{\fontsize{6.5}{7}\selectfont#1 } \newcommand{\multiline}[1]{\State \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{#1\strut}} \begin{figure}[h!] \vspace{-9.2mm} \begin{algorithm}[H] \caption{\,\small Temporal Random Walk } \label{alg:temporal-random-walk}{ \begin{spacing}{1.15} \fontsize{7.5}{8.5}\selectfont \begin{algorithmic}[1] \vspace{-1.3mm} \Procedure{TemporalWalk}{$G^{\prime}$, $e=(s,r)$, $t$, $L$, $C$} \State Set $i \leftarrow r$ and initialize temporal walk $S_t = \big[\, s, r \,\big]$ \label{algline:temporal-walk-init-walk-and-add-start-node-function} \For{$p = 1$ {\bf to} $\min(L, C) - 1$} \label{algline:temporal-walk-for} \State $\Gamma_t(i) = \big\{(w, t^\prime) \,\, | \,\, e=(i,w, t^\prime) \in E_T \, \wedge \, t^\prime > t \big\} $ \label{algline:temporal-walk-get-neighbors} \If {$|\Gamma_t(i)| > 0$} \State Select node $j$ from distribution $\mathbb{F}_\Gamma (\Gamma_t(i))$ \label{algline:temporal-walk-alias-sample} \State Append $j$ to $S_t$ \label{algline:temporal-walk-add-node-function-to-list} \State Set $t \leftarrow \mathcal{T}(i,j)$ and set $i \leftarrow j$ \Else \; terminate temporal walk \EndIf \EndFor \label{algline:temporal-walk-for-end} \State \textbf{return} temporal walk $S_t$ of length $|S_t|$ rooted at node $s$ \label{algline:temporal-walk-return-temporal-walk} \EndProcedure \end{algorithmic} \end{spacing}} \end{algorithm} \vspace{-2mm} \end{figure} } We summarize the procedure to learn time-preserving embeddings for CTDNs in Algorithm~\ref{alg:temporal-node2vec}, which generalizes the Skip-Gram architecture to learn continuous-time dynamic network embeddings (CTDNEs). However, the framework can easily be used with other deep graph models that leverage random walks (\emph{e.g.},~\cite{lee17-Deep-Graph-Attention}), as the temporal walks can serve as input vectors for neural networks.
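For concreteness, a direct Python transcription of Algorithm~\ref{alg:temporal-random-walk} with unbiased (uniform) neighbor selection is sketched below; the time-sorted adjacency-list representation is an assumption made for illustration:
\begin{verbatim}
import bisect
import random

def temporal_walk(adj, edge, t, L, C):
    """Algorithm 2 with uniform neighbor selection.

    adj[v] -- list of (time, neighbor) pairs sorted by time.
    edge   -- the starting edge (s, r) sampled at time t.
    Walks for at most min(L, C) - 1 steps, or until no temporally
    valid neighbor remains.
    """
    s, r = edge
    walk = [s, r]
    i = r
    for _ in range(min(L, C) - 1):
        nbrs = adj.get(i, [])
        # Gamma_t(i): neighbors reached by edges with time > t,
        # found by binary search over the time-sorted list.
        times = [tm for tm, _ in nbrs]
        k = bisect.bisect_right(times, t)
        if k == len(nbrs):
            break                      # ran out of temporally valid edges
        t, i = random.choice(nbrs[k:]) # uniform F_Gamma (Eq. uniform-neighbor)
        walk.append(i)
    return walk
\end{verbatim}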
There are many methods that can be adapted to learn CTDN embeddings using \emph{temporal random walks} (\emph{e.g.}, node2vec~\cite{node2vec}, struc2vec~\cite{struc2vec}, role2vec~\cite{role2vec}), and the proposed framework is not tied to any particular approach. We point out that Algorithm~\ref{alg:temporal-node2vec} is useful for prediction tasks where the goal is to learn a model using all data up to time $t$ to predict a future discrete or real-valued attribute or state (\emph{e.g.}, whether a link exists or not). Since this work evaluates CTDNEs for link prediction, we include it mainly so the reader can understand one evaluation strategy using CTDNE. However, other applications may require online incremental learning and updating of the embeddings in a streaming fashion as new edges arrive. Recall that CTDNE naturally supports such streaming settings, where edges (or new nodes) arrive continuously over time~\cite{ahmed17streams} and the goal is to update the embeddings in real-time via fast, efficient updates. In Algorithm~\ref{alg:CTDNE-online}, we present an online CTDNE learning framework for incrementally updating the node embeddings as new edges arrive over time from the edge stream. Consider an edge stream $e_1, e_2, \ldots, e_k,\ldots, e_{t-1}, e_{t}, \ldots$ with timestamped edges. Suppose a new edge $(v,u,t)$ arrives at time $t$ from the edge stream (Line~\ref{algline:online-CTDNE-while-edge-arrives}). Then we immediately update the graph by setting $E \leftarrow E \cup \{(v,u,t)\}$ as shown in Line~\ref{algline:online-CTDNE-add-edge-and-nodes-if-needed}.\footnote{At this point, we can also remove any stale edges, \emph{e.g.}, edges that occurred in the distant past as defined by some $\Delta t$.} If either $v$ or $u$ is a new node, \emph{i.e.}, $v \not\in V$ or $u \not\in V$, then we simply set $V \leftarrow V \cup \{v,u\}$. Notice that if $v,u \in V$ then $V \leftarrow V \cup \{v,u\}$ in Line~\ref{algline:online-CTDNE-add-edge-and-nodes-if-needed} has no impact. The next step is to sample a set of temporal walks $\mathcal{S}_{t}$ with the constraint that each temporal walk ends at the new edge $(v,u,t)$ from the edge stream (Line~\ref{algline:online-CTDNE-sample-temporal-walks}). We obtain temporal walks that end in $(v,u,t)$ by reversing the temporal walk and going backwards through time, as shown in Figure~\ref{fig:online-temporal-walk}. This enables us to easily obtain a set of temporal walks that include the new edge, which will be used for incrementally updating the embeddings. Indeed, since the goal is to obtain temporal walks that include the new edge, we know $(v,u,t)$ will be at the end of the temporal walk (since by definition no other edge could have appeared after it), and we simply obtain the walk by going backwards through time. Finally, we incrementally update the appropriate node embeddings using only the sampled temporal walks $\mathcal{S}_{t}$ ending at $(v,u,t)$ at time $t$ (Line~\ref{algline:online-CTDNE-update-embeddings}). In this work, we use online SGD updates (online word2vec)~\cite{kaji2017incremental,peng2017incrementally,luo2015online,li2017psdvec} to incrementally learn the embeddings as new edges arrive. However, other incremental optimization schemes can easily be used as well~\cite{duchi2011adaptive,flaxman2005online,zhao2012fast,schraudolph2007stochastic,ge2015escaping,ying2008online}.
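A compact sketch of this online update loop is given below (illustrative only): \texttt{sample\_walk\_backwards} is an assumed helper implementing the backwards-in-time sampling of Figure~\ref{fig:online-temporal-walk}, and gensim's incremental vocabulary/training calls stand in for any online SGD/word2vec scheme; the model is assumed to have been initialized beforehand.
\begin{verbatim}
def online_ctdne(edge_stream, model, adj, walks_per_edge=5, L=80, omega=10):
    for (v, u, t) in edge_stream:
        # E <- E U {(v,u,t)}; new nodes are added implicitly, and the
        # appends keep adj time-sorted when the stream is time-ordered.
        adj.setdefault(v, []).append((t, u))
        adj.setdefault(u, []).append((t, v))
        # Sample temporal walks that END at (v,u,t) by walking
        # backwards through time from the new edge.
        walks = [sample_walk_backwards(adj, (v, u), t, L)
                 for _ in range(walks_per_edge)]
        walks = [[str(n) for n in w] for w in walks if len(w) >= omega]
        if walks:
            # Partial (online) SGD update using only these walks.
            model.build_vocab(walks, update=True)
            model.train(walks, total_examples=len(walks), epochs=1)
\end{verbatim}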
While Algorithm~\ref{alg:CTDNE-online} assumes the graph stream is infinite, the current and most recently updated embeddings $\vz_1, \vz_2, \ldots, \vz_N$ can be obtained at any time $t$. Concept drift is naturally handled by the framework since we incrementally update embeddings upon the arrival of each edge in the stream using walks that are temporally valid. Hence, the context and resulting embedding of a node change temporally as the graph evolves over time. Furthermore, we can relax the requirement of updating the embeddings after every new edge, and instead wait until a fixed number of edges arrive, or until a fixed amount of time elapses, before updating the embeddings. We call such an approach batched CTDNE updating. The only difference in Algorithm~\ref{alg:CTDNE-online} is that instead of performing an update immediately, we would wait until one of the above conditions becomes true and then perform a batch update. We can also drop edges that occur in the distant past or that have a very small weight. {\algrenewcommand{\alglinenumber}[1]{\fontsize{6.5}{7}\selectfont#1 } \newcommand{\multiline}[1]{\State \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{#1\strut}} \begin{figure}[t!] \vspace{-2mm} \centering \begin{algorithm}[H] \caption{\, Online Learning of Node Embeddings from Edge Streams (Online CTDNE) } \label{alg:CTDNE-online} { \begin{spacing}{1.15} \fontsize{8.0}{9.0}\selectfont \begin{algorithmic}[1] \vspace{-0.5mm} \Require a dynamic network (graph stream) $G$, embedding dimensions $D$ \Ensure dynamic node embeddings $\mZ$ at time $t$ \smallskip \While{new edge $(v,u,t)$ arrives at time $t$ from edge stream} \label{algline:online-CTDNE-while-edge-arrives} \State Add edge $(v,u,t)$ to $E \leftarrow E \cup \{(v,u,t)\}$ and $V \leftarrow V \cup \{v,u\}$ \label{algline:online-CTDNE-add-edge-and-nodes-if-needed} \State Sample temporal walks $\mathcal{S}_{t}$ ending in edge $(v,u,t)$ \label{algline:online-CTDNE-sample-temporal-walks} \State Update embeddings via online SGD/word2vec using only $\mathcal{S}_{t}$ \label{algline:online-CTDNE-update-embeddings} \EndWhile \vspace{0.2mm} \end{algorithmic} \end{spacing}} \end{algorithm} \vspace{-7mm} \end{figure} } \vspace{-4mm} \subsection{Hyperparameters} \noindent While other methods, such as node2vec~\cite{node2vec}, have many hyperparameters that require tuning, the proposed framework has a single hyperparameter that requires tuning. Note that since the framework is general and flexible with many interchangeable components, there is of course the possibility of introducing additional hyperparameters depending on the approaches used to bias the temporal walks. \medskip\noindent\textbf{Arbitrary temporal walk length}: Unlike walks in static graphs, temporal walks in the proposed framework can be of any arbitrary length. In particular, the user does not need to select the length of the walks to sample, as required by static embedding methods~\cite{node2vec,deepwalk}, among the many other hyperparameters required by such methods. As an aside, the temporal context size $\omega$ is not specific to the framework, but arises from the base embedding method that we use. For instance, suppose node2vec/deepwalk is used as the base embedding method in the proposed framework; then $\omega$ is simply the context/window size, and therefore the only requirement on the length of the walk is that it is at least as large as $\omega$, which ensures at least one temporal context can be generated from it.
This is obviously better than node2vec/deepwalk, which require selecting at least $L$, $R$, and $\omega$. Figure~\ref{fig:node-occur-temporal-walks} reports the number of times each node appears in the sampled temporal walks. We also study the frequency of starting a temporal random walk from each node in Figure~\ref{fig:node-starting-temporal-walk-freq}. \begin{figure}[t!] \centering \includegraphics[width=0.46\linewidth]{fig6.pdf} \hfill \includegraphics[width=0.46\linewidth]{fig8.pdf} \vspace{-1mm} \caption{Number of occurrences of each node in the set of sampled temporal walks.} \label{fig:node-occur-temporal-walks} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.46\linewidth]{fig9.pdf} \hfill \includegraphics[width=0.46\linewidth]{fig11.pdf} \vspace{-1mm} \caption{Frequency of starting a temporal random walk from each node. Unlike previous approaches that sample a fixed number of random walks for each node, the proposed framework samples an edge between two nodes to obtain a timestamp to begin the temporal random walk. } \label{fig:node-starting-temporal-walk-freq} \end{figure} \section{Theoretical Analysis} \label{sec:complexity} \noindent Let $N=|V|$ denote the number of nodes, $M=|E_T|$ the number of edges, $D$ the dimensionality of the embedding, $R$ the number of temporal walks per node, $L$ the maximum length of a temporal random walk, and $\Delta$ the maximum degree of a node. Recall that while $R$ is not required, we use it here since the number of temporal random walks $|\mathcal{S}_T|$ is a multiple of the number of nodes $N=|V|$ and thus can be written as $RN$, similar to previous work. \subsection{Time Complexity} \noindent \begin{Lemma} The time complexity for learning CTDNEs using the generalized Skip-gram architecture in Section~\ref{sec:time-preserving-embeddings} is \begin{equation}\label{eq:time-complexity-CTDNE-biased} \mathcal{O}(M + N (R \log M + R{L}\Delta + D)) \end{equation}\noindent and the time complexity for learning CTDNEs with \emph{unbiased} temporal random walks (uniform) is: \begin{equation}\label{eq:time-complexity-DTDNE-biased} \mathcal{O}(N (R \log M + R{L}\log \Delta + D)) \end{equation}\noindent \end{Lemma} \noindent\textsc{Proof}. The time complexity of each of the three steps is provided below. We assume the exponential variant is used for both $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$, since this CTDNE variant is the most computationally expensive among the nine variants obtained by using uniform, linear, or exponential for $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$. Edges are assumed to be ordered by time such that $\mathcal{T}(e_1) \leq \mathcal{T}(e_2) \leq \cdots \leq \mathcal{T}(e_{M})$. Similarly, the neighbors of each node are also ordered by time. \textbf{Initial Temporal Edge Selection:} To derive $\mathbb{F}_s$ for any of the variants used in this work (uniform, linear, exponential) takes $\mathcal{O}(M)$ time, since each variant can be computed with a single or at most two passes over the edges. Selecting an initial edge via $\mathbb{F}_s$ takes $\mathcal{O}(\log M)$ time. Now $\mathbb{F}_s$ is used to select the initial edge for each temporal random walk $S_{t} \in \mathcal{S}_T$ and thus an initial edge is selected $RN=|\mathcal{S}_T|$ times.
This gives a total time complexity of $\mathcal{O}(M + RN \log M)$.\footnote{Note that for uniform initial edge selection, the time complexity is linear in the number of temporal random walks, \emph{i.e.}, $\mathcal{O}(RN)$.} \textbf{Temporal Random Walks:} After the initial edge is selected, the next step is to select the next temporally valid neighbor from the set of temporal neighbors $\Gamma_{t}(v)$ of a given node $v$ at time $t$ using a (weighted) distribution $\mathbb{F}_{\Gamma}$ (\emph{e.g.}, uniform, linear, exponential). Note $\mathbb{F}_{\Gamma}$ must be computed and maintained for each node. Given a node $v$ and a time $t_{*}$ associated with the previous edge traversal in the temporal random walk, the first step in any variant (uniform, linear, exponential; Section~\ref{sec:temporal-random-walk}) is to obtain the ordered set of temporal neighbors $\Gamma_{t}(v) \subseteq \Gamma(v)$ of node $v$ that occur after $t_{*}$. Since the set of all temporal neighbors is already stored and ordered by time, we only need to find the index of the neighbor $w \in \Gamma(v)$ with time $t>t_{*}$, as this gives us $\Gamma_{t}(v)$. Therefore, $\Gamma_{t}(v)$ is derived in $\mathcal{O}(\log |\Gamma(v)|)$ time via a binary search over the ordered set $\Gamma(v)$; in the worst case, $\mathcal{O}(\log \Delta)$ where $\Delta = \max_{v \in V} |\Gamma(v)|$ is the maximum degree. After obtaining $\Gamma_{t}(v) \subseteq \Gamma(v)$, we derive $\mathbb{F}_{\Gamma}$ in $\mathcal{O}(\Delta)$ time when $d_v = \Delta$. Now, selecting the next temporally valid neighbor according to $\mathbb{F}_{\Gamma}$ takes $\mathcal{O}(\log \Delta)$ for exponential and linear and $\mathcal{O}(1)$ for uniform. For the uniform variant, we select the next temporally valid neighbor in $\mathcal{O}(1)$ constant time by $j \sim \textrm{UniformDiscrete}\{1,2,\ldots,|\Gamma_t(v)|\}$ and then obtain the selected temporal neighbor by directly indexing into $\Gamma_t(v)$. Therefore, the time complexity to select the next node in a biased temporal random walk is $\mathcal{O}(\log \Delta + \Delta) = \mathcal{O}(\Delta)$ in the worst case and $\mathcal{O}(\log \Delta)$ for unbiased (uniform). For a temporal random walk of length ${L}$, the time complexity is $\mathcal{O}({L}\Delta)$ for a biased walk with linear/exponential and $\mathcal{O}({L} \log \Delta)$ for an unbiased walk. Therefore, the time complexity for $RN$ biased temporal random walks of length ${L}$ is $\mathcal{O}(RN{L}\Delta)$ in the worst case and $\mathcal{O}(RN{L}\log \Delta)$ for unbiased. \textbf{Learning Time-dependent Embeddings:} For the Skip-Gram-based generalization given in Section~\ref{sec:time-preserving-embeddings}, the time complexity per iteration of Stochastic Gradient Descent (SGD) is $\mathcal{O}(ND)$ where $D \ll N$. While the time complexity of a single iteration of SGD is less than a single iteration of Alternating Least Squares (ALS)~\cite{pilaszy2010fast}, SGD requires more iterations to obtain a sufficiently good model and is sensitive to the choice of learning rate~\cite{yun2014nomad,oh2015fast}. Moreover, SGD is more challenging to parallelize compared to ALS~\cite{pilaszy2010fast} or Cyclic Coordinate Descent (CCD)~\cite{kim2014algorithms,rossi2015dsaa-pcmf}. Nevertheless, the choice of optimization scheme depends on the objective function of the embedding method generalized via the CTDNE framework. \subsection{Space Complexity} \noindent Storing the $\mathbb{F}_{s}$ distribution takes $\mathcal{O}(M)$ space. The temporal neighborhoods do not require any additional space (as we simply store an index).
Storing $\mathbb{F}_{\Gamma}$ takes $\mathcal{O}(\Delta)$ space (which can be reused for each node in the temporal random walk). The embedding matrix $\mZ$ takes $\mathcal{O}(ND)$ space. Therefore, the space complexity of CTDNEs is $\mathcal{O}(M + ND + \Delta) = \mathcal{O}(M + ND)$. This obviously holds in the online stream setting, where edges arrive continuously over time, since this is a special case of the more general CTDNE setting. \begin{table}[b!] \vspace{-3mm} \centering \renewcommand{\arraystretch}{1.15} \fontsize{8}{9}\selectfont \setlength{\tabcolsep}{6.0pt} \caption{Dynamic Network Data and Statistics.} \vspace{-2.5mm} \label{table:dynamic-network-stats} \begin{tabular}{r l ll c H@{}} \multicolumn{6}{@{}p{0.94\linewidth}}{\footnotesize Let $|E_T|$ = number of \emph{temporal edges}; $\bar{d}$ = average temporal node degree; and $d_{\max}$ = max temporal node degree. } \\ \toprule & & & & \textbf{Timespan} \\ \textbf{Dynamic Network} & $|E_T|$ & $\bar{d}$ & $d_{\max}$ & \textbf{(days)} \\ \midrule \text{ia-contact} & 28.2K & 206.2 & 2092 & 3.97 \\ \text{ia-hypertext} & 20.8K & 368.5 & 1483 & 2.46 \\ \text{ia-enron-employees} & 50.5K & 669.8 & 5177 & 1137.55 \\ \text{ia-radoslaw-email} & 82.9K & 993.1 & 9053 & 271.19 \\ \text{ia-email-EU} & 332.3K & 674.1 & 10571 & 803.93 \\ \text{fb-forum} & 33.7K & 75.0 & 1841 & 164.49 \\ \text{soc-bitcoinA} & 24.1K & 12.8 & 888 & 1901.00 \\ \text{soc-wiki-elec} & 107K & 30.1 & 1346 & 1378.34 \\ \bottomrule \end{tabular} \end{table} \section{Experiments} \label{sec:exp} \noindent The experiments are designed to investigate the effectiveness of the proposed \emph{continuous-time dynamic network embeddings} (CTDNE) framework for prediction. To ensure the results and findings of this work are significant and meaningful, we investigate a wide range of temporal networks from a variety of application domains with fundamentally different structural and temporal characteristics. A summary of the dynamic networks used for evaluation and their statistics are provided in Table~\ref{table:dynamic-network-stats}. All networks investigated are continuous-time dynamic networks with $\mathbb{T} = \RR^{+}$. For these dynamic networks, the edge timestamps record the time an edge occurred at the level of seconds or milliseconds, which is the finest granularity given as input; our approach uses the finest time scale available in the graph data. All data is from NetworkRepository~\cite{nr} and is easily accessible for reproducibility. We designed the experiments to answer four important questions. First, are \emph{continuous-time dynamic network embeddings} (CTDNEs) more useful than embeddings from methods that ignore time? Second, how do the different embedding methods from the CTDNE framework compare? Third, are CTDNEs better than embeddings learned from a sequence of discrete snapshot graphs that approximate the edge stream (DTNE methods)? Finally, can we incrementally learn node embeddings fast using the online CTDNE framework? \subsection{Experimental setup} \noindent Since this work is the first to learn embeddings over an edge stream (CTDN), there are no methods that are directly comparable. Nevertheless, we first compare CTDNE against node2vec~\cite{node2vec}, DeepWalk~\cite{deepwalk}, and LINE~\cite{line}.
For node2vec, we use the same hyperparameters ($D=128$, $R=10$, $L=80$, $\omega = 10$) and grid search over $p,q\in \{0.25, 0.50, 1, 2, 4\}$ as mentioned in~\cite{node2vec}. The same hyperparameters are used for DeepWalk (with the exception of $p$ and $q$). Unless otherwise mentioned, CTDNE methods use $\omega = 10$ and $D=128$. For LINE, we also use $D=128$ with 2nd-order proximity and number of samples $T=60$ million. \begin{table}[h!] \centering \small \fontsize{8}{9}\selectfont \renewcommand{\arraystretch}{1.15} \setlength{\tabcolsep}{2.0pt} \caption{Results for Temporal Link Prediction (AUC).} \label{table:link-pred-results} \vspace{-2.4mm} \begin{tabularx}{1.00\linewidth}{r cc X c r} \toprule \textbf{Dynamic Network} & \textbf{DeepWalk} & \textbf{Node2Vec} & \textbf{LINE} & \textbf{CTDNE} & (\textsc{Gain}) \\ \midrule \text{ia-contact} & \text{0.845} & \textrm{0.874} & \textrm{0.736} & \textbf{0.913} & (\text{+10.37\%}) \\ \text{ia-hypertext} & \text{0.620} & \textrm{0.641} & \textrm{0.621} & \textbf{0.671} & (\text{+6.51\%}) \\ \text{ia-enron-employees} & \textrm{0.719} & \textrm{0.759} & \textrm{0.550} & \textbf{0.777} & (\text{+13.00\%}) \\ \text{ia-radoslaw-email} & \textrm{0.734} & \textrm{0.741} & \textrm{0.615} & \textbf{0.811} & (\text{+14.83\%}) \\ \text{ia-email-EU} & \textrm{0.820} & \textrm{0.860} & \textrm{0.650} & \textbf{0.890} & (\text{+12.73\%}) \\ \text{fb-forum} & \textrm{0.670} & \textrm{0.790} & \textrm{0.640} & \textbf{0.826} & (\text{+15.25\%}) \\ \text{soc-bitcoinA} & \textrm{0.840} & \textrm{0.870} & \textrm{0.670} & \textbf{0.891} & (\text{+10.96\%}) \\ \text{soc-wiki-elec} & \textrm{0.820} & \textrm{0.840} & \textrm{0.620} & \textbf{0.857} & (\text{+11.32\%}) \\ \bottomrule \multicolumn{6}{l}{\footnotesize $^{\star}$\textsc{Gain} = mean gain in AUC averaged over all embedding methods.} \\ \end{tabularx} \vspace{-2mm} \end{table} \subsection{Comparison} \label{sec:comparison} \noindent We evaluate the performance of the proposed framework on the temporal link prediction task. To generate a set of labeled examples for link prediction, we first sort the edges in each graph by time (ascending) and use the first $75\%$ for representation learning. The remaining $25\%$ are considered as positive links and we sample an equal number of negative edges randomly. Since the temporal network is a multi-graph where an edge between two nodes can appear multiple times with different timestamps, we take care to ensure edges that appear in the training set do not appear in the test set. We perform link prediction on this labeled data $\mathcal{X}$ of positive and negative edges. After the embeddings are learned for each node, we derive edge embeddings by combining the learned embedding vectors of the corresponding nodes. More formally, given embedding vectors $\vz_i$ and $\vz_j$ for node $i$ and $j$, we derive an edge embedding vector $\vz_{ij} = \Phi(\vz_i, \vz_j)$ where \begin{equation} \label{eq:embedding-functions} \nonumber \Phi \in \big\lbrace(\vz_i + \vz_j)\big/2,\;\; \vz_i \odot \vz_j,\;\; \abs{\vz_i - \vz_j},\;\; (\vz_i - \vz_j)^{\circ 2}\big\rbrace \end{equation}\noindent and $\vz_i \odot \vz_j$ is the element-wise (Hadamard) product and $\vz^{\circ 2}$ is the Hadamard power. We use logistic regression (LR) with hold-out validation of $25\%$. Experiments are repeated for 10 random seed initializations and the average performance is reported.
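For concreteness, the sketch below (our illustration; scikit-learn is assumed available) shows the four binary operators $\Phi$ and the held-out logistic-regression evaluation just described:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# The four binary operators Phi from Eq. (embedding-functions).
OPS = {
    'mean':     lambda zi, zj: (zi + zj) / 2.0,
    'hadamard': lambda zi, zj: zi * zj,
    'l1':       lambda zi, zj: np.abs(zi - zj),
    'l2':       lambda zi, zj: (zi - zj) ** 2,  # Hadamard power
}

def evaluate_link_prediction(emb, edges, labels, op='hadamard'):
    # emb: dict node -> D-dim vector; edges: list of (i, j) pairs
    # labeled 1 (positive link) or 0 (sampled negative).
    X = np.array([OPS[op](emb[i], emb[j]) for i, j in edges])
    Xtr, Xte, ytr, yte = train_test_split(X, np.asarray(labels),
                                          test_size=0.25)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    return roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
\end{verbatim}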
Unless otherwise mentioned, we use ROC AUC (denoted as AUC for short) to evaluate the models and use the same number of dimensions $D$ for all methods. To compare the methods fairly, we ensure all baseline methods use the same amount of information for learning. In particular, the number of \emph{temporal context windows} is \begin{equation} \label{eq:num-context-windows} \beta = R \times N \times (L - \omega + 1) \end{equation}\noindent where $R$ denotes the number of walks for each node and $L$ is the length of a random walk required by the baseline methods. Recall that $R$ and $L$ are \emph{not} required by CTDNE and are only used above to ensure that all methods use exactly the same amount of information for evaluation purposes. Note that CTDNE does not collect a fixed number of random walks (of a fixed length) for each node as done by many other embedding methods~\cite{deepwalk,node2vec}; instead, the user simply specifies the expected number of temporal context windows per node, and the total number of temporal context windows $\beta$ is derived as a multiple of the number of nodes $N=|V|$. Hence, CTDNE is also easier to use, as it requires far fewer hyperparameters that must be carefully tuned by the user. Observe that it is possible (though unlikely) that a node $u \in V$ is not in a valid temporal walk, \emph{i.e.}, the node does not appear in any temporal walk $S_t$ of length $|S_t| \geq \omega$. If such a case occurs, we simply relax the notion of temporal random walk for that node by ensuring the node appears in at least one random walk of sufficient length, even if part of the random walk does not obey time. As an aside, relaxing the notion of temporal random walks by allowing the walk to sometimes violate the time-constraint can be viewed as a form of regularization. Results are shown in Table~\ref{table:link-pred-results}. For this experiment, we use the simplest CTDNE variant from the proposed framework and do not apply any \emph{additional bias} to the selection strategy. In other words, both $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$ are set to the uniform distribution. We note, however, that since temporal walks are time-obeying (by Definition~\ref{def:temporal-walk}), the selection is already biased towards edges that appear later in time, as the random walk traversal does not go back in time. In Table~\ref{table:link-pred-results}, the proposed approach is shown to perform consistently better than DeepWalk, node2vec, and LINE. This is an indication that important information is lost when temporal information is ignored. Strikingly, the CTDNE model does not leverage the bias introduced by node2vec~\cite{node2vec}, and yet still outperforms this model by a significant margin. We could have generalized node2vec in a similar manner using the proposed framework from Section~\ref{sec:framework}: replacing the notion of random walks in node2vec with the notion of \emph{temporal random walks} biased by the (weighted) distributions $\mathbb{F}_s$ (Section~\ref{sec:selection-of-start-time}) and $\mathbb{F}_{\Gamma}$ (Section~\ref{sec:temporal-random-walk}) can be expected to achieve even better predictive performance. \begin{table}[h!]
\vspace{-4mm} \centering \setlength{\tabcolsep}{3.0pt} \renewcommand{\arraystretch}{1.15} \small \fontsize{8}{9}\selectfont \caption{Results for Different CTDNE Variants} \label{table:variants-link-pred-results} \vspace{-2.4mm} \begin{tabularx}{1.00\linewidth}{ll c XXXX @{}} \multicolumn{7}{p{1.0\linewidth}}{\footnotesize $\mathbb{F}_s$ is the distribution for initial edge sampling and $\mathbb{F}_{\Gamma}$ is the distribution for temporal neighbor sampling. } \\ \toprule \multicolumn{2}{c}{\textsc{Variant}} \\ \multicolumn{1}{c}{\fontsize{11}{12}\selectfont $\mathbb{F}_s$} & \multicolumn{1}{c}{\fontsize{11}{12}\selectfont $\mathbb{F}_{\Gamma}$} && \multicolumn{1}{l}{\textsf{\fontsize{7.5}{8.5}\selectfont contact}} & \textsf{\fontsize{7.5}{8.5}\selectfont hyper} & \textsf{\fontsize{7.5}{8.5}\selectfont enron} & \multicolumn{1}{l}{\textsf{\fontsize{7.5}{8.5}\selectfont rado}} \\ \midrule \fontsize{8.5}{9.5}\selectfont $\mathbf{Unif}$ (Eq.~\ref{eq:uniform-edge}) & $\mathbf{Unif}$ (Eq.~\ref{eq:uniform-neighbor}) && 0.913 & 0.671 & 0.777 & 0.811 \\ $\mathbf{Unif}$ (Eq.~\ref{eq:uniform-edge}) & $\mathbf{Lin}$ (Eq.~\ref{eq:linear-penalty}) && 0.903 & 0.665 & 0.769 & 0.797 \\ $\mathbf{Lin}$ (Eq.~\ref{eq:linear-dist}) & $\mathbf{Unif}$ (Eq.~\ref{eq:uniform-neighbor}) && 0.915 & 0.675 & 0.773 & 0.818 \\ $\mathbf{Lin}$ (Eq.~\ref{eq:linear-dist}) & $\mathbf{Lin}$ (Eq.~\ref{eq:linear-penalty}) && 0.903 & 0.667 & 0.782 & 0.806 \\ $\mathbf{Exp}$ (Eq.~\ref{eq:exponential-dist}) & $\mathbf{Exp}$ (Eq.~\ref{eq:exponential-penalty}) && \textbf{0.921} & 0.681 & \textbf{0.800} & 0.820 \\ $\mathbf{Unif}$ (Eq.~\ref{eq:uniform-edge}) & $\mathbf{Exp}$ (Eq.~\ref{eq:exponential-penalty}) && 0.913 & 0.670 & 0.759 & 0.803 \\ $\mathbf{Exp}$ (Eq.~\ref{eq:exponential-dist}) & $\mathbf{Unif}$ (Eq.~\ref{eq:uniform-neighbor}) && 0.920 & \textbf{0.718} & 0.786 & \textbf{0.827} \\ $\mathbf{Lin}$ (Eq.~\ref{eq:linear-dist}) & $\mathbf{Exp}$ (Eq.~\ref{eq:exponential-penalty}) && 0.916 & 0.681 & 0.782 & 0.823 \\ $\mathbf{Exp}$ (Eq.~\ref{eq:exponential-dist}) & $\mathbf{Lin}$ (Eq.~\ref{eq:linear-penalty}) && 0.914 & 0.675 & 0.747 & 0.817\\ \bottomrule \end{tabularx} \vspace{-2mm} \end{table} In all cases, the proposed approach significantly outperforms the other embedding methods across all dynamic networks (Table~\ref{table:link-pred-results}). The mean gain in AUC averaged over all embedding methods for each dynamic network is shown in Table~\ref{table:link-pred-results}. Notably, CTDNE achieves an overall gain in AUC of $11.9\%$ across all embedding methods and graphs. These results indicate that modeling and incorporating the temporal dependencies in graphs is important for learning appropriate and meaningful network representations. It is also worth noting that many other approaches that leverage random walks can be generalized using the proposed framework~\cite{struc2vec,ComE,ASNE,dong2017metapath2vec,lee17-Deep-Graph-Attention}, as can any future state-of-the-art embedding method. \vspace{-2mm} \subsection{Comparing Variants from the CTDNE Framework} \label{sec:exp-variants} \noindent We investigate three different approaches for $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$, giving rise to nine different CTDNE variants by taking all possible combinations of the unbiased and biased distributions discussed in Section~\ref{sec:selection-of-start-time} and Section~\ref{sec:temporal-random-walk}.
In particular, we sample (1) the starting temporal edge $e_*$ via $\mathbb{F}_s$, and (2) each subsequent edge in a temporal random walk via $\mathbb{F}_{\Gamma}$. For learning dynamic node embeddings in an online fashion, $\mathbb{F}_s$ is not required, since for each new edge $(i,j,t)$ in the graph stream we sample a number of temporal walks ending at $(i,j)$ and use these to update the embedding. Overall, we find that using a biased distribution (\emph{e.g.}, linear or exponential) improves predictive performance in terms of AUC when compared to the uniform distribution on many graphs. For others, however, there is no noticeable gain in performance. This can likely be attributed to the fact that most of the dynamic networks investigated have a relatively short time span (a few years at most). Table~\ref{table:variants-link-pred-results} provides results for a few other variants from the framework. In particular, it shows the difference in AUC when applying a biased distribution to the initial edge selection strategy $\mathbb{F}_s$ as well as to the temporal neighbor selection $\mathbb{F}_{\Gamma}$ for the temporal random walk. Interestingly, using a biased distribution for $\mathbb{F}_s$ seems to yield larger improvements on the tested datasets. However, for \text{ia-enron-employees}, the best result is observed when both distributions are biased. \subsection{Continuous vs. Discrete Approximation-based Embeddings} \noindent We also investigate the difference between discrete-time models that learn embeddings from a sequence of discrete snapshot graphs, and the class of continuous-time embeddings proposed in this paper. \begin{Definition}[\sc DTDN Embedding] \label{def:DTDNE} A discrete-time dynamic network embedding (DTDNE) is defined as any embedding derived from a sequence of discrete static snapshot graphs $\mathcal{G} = \{G_1,G_2,\ldots,G_t\}$. This includes any embedding learned from temporally smoothed static graphs or any representation derived from the initial sequence of discrete static graphs. \end{Definition}\noindent Previous work on temporal networks has focused on DTDNE methods as opposed to the class of CTDNE methods proposed in this work. Notice that DTDNE methods use \emph{approximations} of the actual dynamic network, whereas CTDN embeddings leverage the actual valid temporal information without any temporal loss. In this experiment, we create discrete snapshot graphs and learn embeddings for each one using the previous approaches. As an example, suppose we have a sequence of $T=4$ snapshot graphs where each graph represents a day of activity, and further suppose $D=128$. For each snapshot graph, we learn a $(D/T)$-dimensional embedding and concatenate them all to obtain a $D$-dimensional embedding, which we then evaluate for link prediction as described previously.
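A minimal sketch of this DTDNE baseline (our illustration; \texttt{embed\_static} is an assumed callable wrapping any static method such as node2vec) is:
\begin{verbatim}
import numpy as np

def dtdne_embeddings(snapshots, embed_static, D=128):
    """Embed each snapshot independently and concatenate.

    snapshots    -- list of T static snapshot graphs G_1, ..., G_T.
    embed_static -- assumed callable: (graph, dim) -> {node: vector}.
    """
    T = len(snapshots)
    d = D // T                                  # (D/T)-dim per snapshot
    parts = [embed_static(G_i, dim=d) for G_i in snapshots]
    nodes = set().union(*[set(p.keys()) for p in parts])
    emb = {}
    for v in nodes:
        # Nodes inactive in a snapshot get the all-zeros vector there
        # (one of several heuristics discussed below).
        emb[v] = np.concatenate([p.get(v, np.zeros(d)) for p in parts])
    return emb
\end{verbatim}
\begin{table}[b!] \vspace{-8mm} \centering \small \fontsize{8}{9}\selectfont \renewcommand{\arraystretch}{1.10} \setlength{\tabcolsep}{2.0pt} \caption{Results Comparing DTDNEs to CTDNEs (AUC)} \label{table:link-pred-results-discrete-model} \vspace{-2.4mm} \begin{tabularx}{1.0\linewidth}{@{}r cc cc c @{}rH} \multicolumn{8}{p{1.0\linewidth}}{\footnotesize CTDNE-Unif uses uniform for both $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$ whereas CTDNE-Opt selects the distributions via model learning (and hence corresponds to the best model).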
} \\ \toprule \textbf{Dynamic Network} && \textbf{DTDNE} && \textbf{CTDNE-Unif} \; & \textbf{CTDNE-Opt} & \;(\textsc{Gain}) \\ \midrule \text{ia-contact} && \textrm{0.843} && \text{0.913} & \textbf{0.921} & (\text{+8.30\%}) \\ \text{ia-hypertext} && 0.612 && \text{0.671} & \textbf{0.718} & (\text{+9.64\%}) \\ \text{ia-enron-employees} && 0.721 && \text{0.777} & \textbf{0.800} & (\text{+7.76\%}) \\ \text{ia-radoslaw-email} && 0.785 && \text{0.811} & \textbf{0.827} & (\text{+3.31\%}) \\ \bottomrule \multicolumn{8}{p{1.0\linewidth}}{\footnotesize $^{\star}$\textsc{Gain} = gain in AUC of CTDNE-Unif relative to DTDNE.} \\ \end{tabularx} \end{table} A challenging problem common with DTDNE methods is how to handle nodes that are not active in a given static snapshot graph $G_i$ (\emph{i.e.}, the node has no edges that occur in $G_i$). In such situations, we set the node embedding for that static snapshot graph to all zeros. However, we also investigated using the node embedding from the last active snapshot graph, as well as setting the embedding of an inactive node to be the mean embedding of the active nodes in the given snapshot graph, and observed similar results. More importantly, unlike DTDNE methods, which require many heuristics to handle such issues (\emph{e.g.}, the choice of time-scale, the handling of inactive nodes), CTDNEs do not. CTDNEs also avoid many other issues~\cite{CTDNE} discussed previously that arise from DTDN embedding methods that use a sequence of discrete static snapshot graphs to approximate the actual dynamic network. For instance, it is challenging and unclear how to select the most appropriate time-scale for creating the sequence of static snapshot graphs, and the actual time-scale is highly dependent on the temporal characteristics of the network and the underlying application. Moreover, all DTDNs (regardless of the time-scale) are \emph{approximations} of the actual dynamic network. Thus, any DTDN embedding method is inherently lossy and is only as good as the discrete approximation of the CTDN (graph stream). Results are provided in Table~\ref{table:link-pred-results-discrete-model}. Since node2vec always performs the best among the baseline methods (Table~\ref{table:link-pred-results}), we use it as a basis for the DTDN embeddings. For brevity, we show results for each of the networks used previously in Table~\ref{table:variants-link-pred-results}. Overall, the proposed CTDNEs perform better than DTDNEs as shown in Table~\ref{table:link-pred-results-discrete-model}. Note that CTDNE-Unif in Table~\ref{table:link-pred-results-discrete-model} corresponds to using uniform for both $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$. Obviously, better results can be achieved by learning $\mathbb{F}_s$ and $\mathbb{F}_{\Gamma}$ automatically, as shown in Table~\ref{table:variants-link-pred-results}. The gain in AUC for each graph is shown in the rightmost column of Table~\ref{table:link-pred-results-discrete-model}. The mean gain in AUC of CTDNE compared to DTDNE over all graphs is $7.25\%$. \definecolor{typeTwoColor}{RGB}{222,45,38} \definecolor{typeOneColor}{RGB}{49,130,189} \definecolor{typeThreeColor}{RGB}{77,172,38} \makeatletter \global\let\tikz@ensure@dollar@catcode=\relax \makeatother \tikzstyle{every node}=[font=\large,line width=1.5pt] \begin{figure}[h!]
\centering \begin{center} \scalebox{0.5}{ \centering \begin{tikzpicture}[->,>=latex,shorten >=2.4pt,auto,node distance=2.6cm,thick, main node/.style={circle,draw=white,fill=typeOneColor,draw,text=white,minimum width=0.9cm,font=\sffamily\Large\bfseries}, red node/.style={circle,draw=white,fill=typeTwoColor,draw,text=white,minimum width=0.9cm,font=\sffamily\Large\bfseries}, white node/.style={circle,draw=white,fill=white,text=white,draw,text=white,minimum width=0.9cm,font=\sffamily\Large\bfseries}, whitesmall node/.style={circle,fill=white,draw=white,minimum width=0.02cm,font=\sffamily\Large\bfseries}] \node[main node] (3) {}; \node[main node] (10) [left of=3, left=9mm] {}; \node[main node] (1) [below left of=3, left=2mm] {}; \node[main node] (4) [below right of=1, left=0.1mm] {}; \node[white node] (44) [below left of=1, left=4mm] {}; \node[white node] (444) [below left of=1, left=8mm, above=1mm] {}; \node[white node] (55) [above left of=1, left=4mm] {}; \node[white node] (66) [left of=1] {}; \node[red node] (2) [below right of=3] {$\mathbf{k}$}; \node[main node] (9) [below right of=2] {}; \node[red node] (5) [right of=2, right=5mm] {$\mathbf{i}$}; \node[red node] (6) [below right of=5, right=5mm] {$\mathbf{j}$}; \node[white node] (88) [above of=2, below=13mm, left=0mm] {}; \node[whitesmall node] (99) [below of=2, above=15mm, left=5mm] {}; \node[whitesmall node] (999) [left of=9] {}; \node[white node] (7) [left of=1] {$\mathbf{---}$}; \node[white node] (8) [right of=5] {$\mathbf{---}$}; \node[white node] (111) [below of=66, left=0mm] {}; \node[white node] (222) [below of=8, right=5mm] {}; \tikzstyle{LabelStyle}=[below=3pt] \path[every node/.style={font=\large \sffamily}] (10) edge [left] node [above left] {$\mathbf{t_1}$} (3) (1) edge [right] node [above left] {$\mathbf{t_2}$} (2) (4) edge [] node[anchor=center,below] {$\mathbf{t_3}$} (2) (9) edge [right] node[above left] {$\mathbf{t_6}$} (6) (55) edge [dashed, left] node[below left] {} (1) (44) edge [dashed, left] node[below left] {} (4) (44) edge [dashed, right] node[below right] {} (2) (66) edge [dashed, left] node[below left] {} (1) (99) edge [dashed] node[below=0pt] {} (9) (999) edge [dashed] node[below=0pt] {} (9) (3) edge[bend left] node[sloped,anchor=center,above] {$\mathbf{t_4}$} (5) (5) edge [line width=0.5mm] node[anchor=center,above] {\Large \sffamily \bf t} (6) (2) edge[right, line width=0.5mm] node[sloped,anchor=center,above] {$\mathbf{t_5}$} (5) (111) edge [thick,line width=1.5mm,draw=black, below right] node [below right] {\;\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \Large \bf time} (222); \end{tikzpicture} } \end{center} \vspace{-5mm} \caption{ Temporal Walks for Online CTDNEs. Given a new edge $(i, j,t)$ at time $t$, we immediately add it to the graph and then sample temporal walks ending at edge $(i,j)$ and use them to update the relevant embeddings. An example of a temporal walk is $k \!\! \rightarrow \! i \!\! \rightarrow \! j$ (red nodes). Note $t > t_6 > t_5 > t_4 > t_3 > t_2 > t_1$. In this example, $k$ and $j$ are the training instances. Hence, $\vz_i$ is updated every time $i$ is used in a temporal edge. } \label{fig:online-temporal-walk} \vspace{-5mm} \end{figure} \subsection{Incremental Learning of Node Embeddings} \label{sec:exp-online-learning} \noindent For some applications, it is important to incrementally learn and update embeddings from edges as soon as they arrive in a streaming fashion. 
In such a streaming online setting, we perform fast partial updates to obtain updated embeddings in real-time. Given an edge $(i,j,t)$ at time $t$, we simply obtain a few temporal walks ending at $(i,j)$ and use these to obtain the updated embeddings. An example is shown in Figure~\ref{fig:online-temporal-walk}. In these experiments, we use online SGD updates (online word2vec)~\cite{kaji2017incremental,peng2017incrementally,luo2015online,li2017psdvec} to incrementally learn the embeddings as new edges arrive. However, other incremental optimization schemes can be used as well (\emph{e.g.}, see~\cite{duchi2011adaptive,flaxman2005online,zhao2012fast,schraudolph2007stochastic,ge2015escaping,ying2008online}). We vary the number of temporal walks sampled for every new edge that arrives. Results are shown in Table~\ref{table:streaming-results}. Notably, it takes on average only a few milliseconds to update the embeddings across a wide variety of temporal network streams. These results are from a Python implementation of the approach; the runtime to process a single edge in the stream could be reduced even further with a C++ implementation of the incremental/online learning approach. \begin{table}[h!] \centering \footnotesize \renewcommand{\arraystretch}{1.10} \setlength{\tabcolsep}{6.0pt} \caption{Streaming Online Network Embedding Results} \vspace{-2.4mm} \label{table:streaming-results} \begin{tabular}{r HH llH HH H ccc HH} \multicolumn{12}{p{0.9\linewidth}}{\footnotesize Average runtime (in milliseconds) per edge is reported. We vary the number of walks per new edge from 1 to 10. Recall $|E_T|$ = \# of \emph{temporal edges} and $\bar{d}$ = average temporal node degree. } \\ \toprule & & & & & & & & & \multicolumn{3}{c}{\bf $\mathbf{Time}$ (ms.)} \\ \cmidrule(l{3pt}r{3pt}){6-12} \textbf{Dynamic Network} & & & $|E_T|$ & $\bar{d}$ & & & & & $\mathbf{1}$ & $\mathbf{5}$ & $\mathbf{10}$ & \\ \midrule \text{ia-hypertext} & & & 20.8K & 368.5 & && && 2.769 & 3.721 & 4.927 \\ \text{fb-forum} & & & 33.7K & 75.0 & && && 2.875 & 3.412 & 4.230 \\ \text{soc-wiki-elec} & & & 107K & 30.1 & && && 2.788 & 3.182 & 3.813 \\ \text{ia-contact} & & & 28.2K & 206.2 & && && 2.968 & 4.490 & 6.119 \\ \text{ia-radoslaw-email} & & & 82.9K & 993.1 & && && 3.266 & 5.797 & 8.916 \\ \text{soc-bitcoinA} & & & 24.1K & 12.8 & && && 2.679 & 2.965 & 3.347 \\ \bottomrule \end{tabular} \vspace{-2mm} \end{table} \subsection{Discussion} \label{sec:exp-discussion} \noindent Recently, there has been a wide variety of works that are based on the key idea proposed in our shorter manuscript from early 2018~\cite{CTDNE}, which is to leverage temporal walks to extend existing embedding methods, \emph{e.g.}, see~\cite{huang2020temporal,node2bits-arxiv,kumar2019predicting,beres2019node,trivedi2018dyrep,sajjad2019efficient,heidari2020evolving}. This includes temporal walks based on BFS and/or DFS. Since these works appeared after ours, they were not compared against or reviewed in detail previously. However, we briefly summarize some of these recent works here. In particular, node2bits~\cite{node2bits-arxiv} used the idea of temporal walks to learn space-efficient dynamic embeddings for user stitching. There has been some work on temporal bipartite edge streams, where an RNN-based model is proposed to embed users and items by leveraging the notion of a 1-hop temporal walk used in this work~\cite{kumar2019predicting}.
Other work has used the proposed temporal walks to learn embeddings for tracking and measuring node similarity in edge streams~\cite{beres2019node}. More recently, some work has also used the proposed idea of leveraging temporal walks for embeddings to extend Graph Neural Networks (GNNs)~\cite{huang2020temporal}. In particular, these works use BFS-based temporal walks. Notably, all of these works are based on complex deep learning techniques that leverage temporal walks, yet on some problems they achieve results comparable to the simpler approach proposed here. \section{Challenges \& Future Directions} \label{sec:discussion} \noindent\textbf{Attributed Networks \& Inductive Learning}: The proposed framework for learning \emph{dynamic node embeddings} can be easily generalized to \emph{attributed networks} and to \emph{inductive learning} tasks in temporal networks (graph streams) using the ideas introduced in~\cite{role2vec,ahmed17Gen-Deep-Graph-Learning}. More formally, the notion of attributed/feature-based walk (proposed in~\cite{role2vec,ahmed17Gen-Deep-Graph-Learning}) can be combined with the notion of temporal random walk as follows: \begin{Definition}[\sc Attributed Temporal Walk] \label{def:attr-temporal-random-walk} Let $\vx_i$ be a $d$-dimensional feature vector for node $v_i$. An attributed temporal walk $S$ of length $L$ is defined as a sequence of adjacent node feature-values $\phi(\vx_{i_{1}}), \phi(\vx_{i_{2}}),\ldots, \phi(\vx_{i_{L+1}})$ associated with a sequence of indices $i_{1}, i_{2}, \ldots, i_{L+1}$ such that {\smallskip \begin{compactenum} \item $(v_{i_{t}}, v_{i_{t+1}}) \in E_T$ for all $1 \leq t \leq L$ \item $\mathcal{T}(v_{i_{t}}, v_{i_{t+1}}) \leq \mathcal{T}(v_{i_{t+1}}, v_{i_{t+2}})$ for $1 \leq t < L$ \item $\phi : \vx \rightarrow y$ is a function that maps the input vector $\vx$ of a node to a corresponding feature-value $\phi(\vx)$. \end{compactenum}\noindent }\noindent The feature sequence $\phi(\vx_{i_{1}}), \phi(\vx_{i_{2}}),\ldots, \phi(\vx_{i_{L+1}})$ represents the feature-values that occur during a temporally valid walk, i.e., a walk that obeys the direction of time defined in (2). \end{Definition}\noindent Attributed temporal random walks can be either uniform (unbiased) or non-uniform (biased). Furthermore, the features used in attributed walks can be (i) intrinsic input attributes (such as profession or political affiliation), (ii) structural features derived from the graph topology (degree, triangles, etc.; or even node embeddings from an arbitrary method), or both. Temporal attributed walks can be sampled for every feature as done in~\cite{node2bits-arxiv}. In this case, $\phi : \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}$ and thus we have $d$ different feature-based walks for every temporal walk sampled.
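A minimal sketch of this per-feature expansion (illustrative only; \texttt{X} is an assumed lookup from node to its $d$-dimensional feature vector, with $\phi$ taken to be projection onto each coordinate) complements the matrix written out next:
\begin{verbatim}
def attributed_walks(walk, X, d):
    """Expand one temporal walk into d feature-value sequences.

    walk -- [v_1, v_2, ..., v_{L+1}], already time-ordered.
    X    -- assumed mapping: node id -> d-dimensional feature vector.
    Returns one feature sequence per coordinate k = 1, ..., d.
    """
    return [[X[v][k] for v in walk] for k in range(d)]
\end{verbatim}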
Suppose $\phi$ is the identity function; then for an arbitrary temporal walk $\lbrace(v_{i_{1}}, v_{i_{2}}, t_{i_{1}})$, $(v_{i_{2}},v_{i_{3}}, t_{i_{2}}), \ldots, (v_{i_{L}},$ $v_{i_{L+1}}, t_{i_{L}})\rbrace$ such that $t_{i_{1}} \leq t_{i_{2}} \leq \ldots \leq t_{i_{L}}$ we have the following $d$ attributed temporal walks (one per feature): \begin{align} \begin{matrix} X_{i_{1},1} & X_{i_{2},1} & \cdots & X_{i_{k},1} & \cdots \\ X_{i_{1},2} & X_{i_{2},2} & \cdots & X_{i_{k},2} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ X_{i_{1},d} & X_{i_{2},d} & \cdots & X_{i_{k},d} & \cdots \\ \end{matrix} \end{align} A recent work called node2bits~\cite{node2bits-arxiv} leveraged this idea for learning inductive dynamic node embeddings and demonstrated its effectiveness compared to a variety of state-of-the-art methods. We refer the reader to~\cite{node2bits-arxiv} for detailed results and findings. \medskip\noindent\textbf{Other Types of Temporal Networks}: While this work naturally supports temporal networks and graph streams in general, there are many other networks with more specialized characteristics. For instance, some temporal networks (graph streams) contain edges with start and end times. Developing CTDNE methods for such temporal networks remains a challenge. Furthermore, another open and challenging problem that remains to be addressed is how to develop graph stream embedding techniques that require a fixed amount of space. Other applications may require dynamic node embedding methods that are space-efficient (\emph{e.g.}, by learning a sparse vector representation for each node). \medskip\noindent\textbf{Temporal Weighting and Bias}: This paper explored a number of temporal weighting and bias functions for decaying the weights of data that appears further in the past. More research is needed to fully understand their impact and the types of temporal networks and characteristics for which each is best suited. Some early work has focused on temporally weighting the links, nodes, and attributes prior to learning embeddings~\cite{rossi2012dynamic-srl}; this idea has yet to be explored for learning general node embeddings, and future work should also investigate new temporal weighting schemes for links, nodes, and attributes. Furthermore, one can also incorporate a decay function for each temporal walk such that more temporal influence is given to recent nodes in the walk than to nodes in the distant past. Hence, each temporal walk is assigned a sequence of weights which can be incorporated into the Skip-Gram approach; for instance, an exponential decay function assigns the weights $\alpha^{t-1}, \alpha^{t-2}, \ldots, \alpha^{t-k}$ to the nodes in the walk. However, there are many other ways to temporally weight or bias the walk, and it is unclear when one approach works better than another. Future work should systematically investigate different approaches. \section{Conclusion} \label{sec:conc} \noindent In this work, we described a new class of embeddings based on the notion of temporal walks. This new class of embeddings is learned directly from the temporal network (graph stream) without having to approximate the edge stream as a sequence of discrete static snapshot graphs. As such, these embeddings can be learned in an online fashion as they are naturally amenable to graph streams and incremental updates. We investigated a framework for learning such dynamic node embeddings using the notion of temporal walks.
The proposed approach can be used as a basis for generalizing existing (or future state-of-the-art) random walk-based embedding methods to learn dynamic node embeddings from dynamic networks (graph streams). The result is a more appropriate dynamic node embedding that captures the important temporal properties of the node in the continuous-time dynamic network. By learning dynamic node embeddings based on temporal walks, we avoid the issues and information loss that arise when time is ignored or approximated using a sequence of discrete static snapshot graphs. In contrast to previous work, the proposed class of embeddings is learned from temporally valid information. The experiments demonstrated the effectiveness of this new class of dynamic embeddings on several real-world networks. \makeatletter \IEEEtriggercmd{\reset@font\normalfont\fontsize{7.9pt}{8.40pt}\selectfont} \makeatother \IEEEtriggeratref{1}
{ "timestamp": "2020-07-20T02:18:41", "yymm": "1904", "arxiv_id": "1904.06449", "language": "en", "url": "https://arxiv.org/abs/1904.06449" }
\section{Introduction} The theory of quantum fields in curved spacetimes remains, to this day, one of the leading methods for studying quantum corrections to the evolution of the spacetime geometry. Historically, one of the most notable results originating from this theory is that of Hawking radiation~\cite{Hawking1975}: the formation of a black hole entails the emission of thermal radiation, which through back-reaction can translate into the slow depletion of the black-hole mass. This quantum effect was later also shown to occur in the presence of cosmological horizons in an expanding universe~\cite{GH1977}.\par The semiclassical theory is characterised by keeping a classical spacetime background, while introducing quantum corrections to the energy-momentum density which governs its dynamics. This is achieved by calculating the renormalised stress-energy tensor (RSET) of quantum fields, which places the stress-energy content of the quantum vacuum on the same footing as the classical matter term in the Einstein equations. This semiclassical approach can be considered an intermediate step on the way toward a quantum theory of gravity, in which the interactions of matter and spacetime would reveal their quantum characteristics more naturally and fully (see e.g. \cite{BD} for a discussion of its interpretation as an approximation to quantum gravity). The general consensus is that this approximation is valid in regions of spacetime with sufficiently low, non-Planckian curvature (e.g. the horizon of astrophysical black holes), where it provides insightful information about quantum corrections to classical general relativity. These corrections are, however, suppressed by Planck's constant, and although they have far-reaching conceptual implications, they appear practically irrelevant for most astrophysical processes. Still, there might exist situations in which this suppression could be overcome, as we will see in this work.\par When quantising a field in a curved spacetime, there is no preferred slicing into spacelike hypersurfaces (orthogonal to a timelike vector field) on which to define an operator algebra. This leads to a corresponding ambiguity in the choice of the vacuum and particle states of the field. An interesting example which illustrates this ambiguity, and is particularly relevant to this work, is the following. If we consider a static black hole and quantise with respect to the Schwarzschild time coordinate, we get a definition of particles which becomes more similar to the one in Minkowski spacetime the farther away from the horizon one is, matching it perfectly in the asymptotic region where spacetime is flat (the corresponding vacuum state is known as the Boulware vacuum \cite{Candelas1980,Boulware}). However, at the horizon itself, operator expectation values in this state present a divergent behaviour, owing to the irregularity of the Schwarzschild time coordinate there. This particular quantisation is therefore deemed nonphysical, backing up the idea that there should not exist \textit{eternal} black holes in our universe, which would require it. A somewhat more physically reasonable scenario is that of an asymptotically flat spacetime (ignoring cosmological backgrounds) with an initially dispersed distribution of matter, which eventually collapses to form a black hole. In it, we can choose an asymptotically Minkowskian quantisation in the asymptotic past, which can be extended to the whole spacetime (through the solutions of the field equation).
The vacuum state for this quantisation is known as the $in$ vacuum, for which the Hawking radiation result was obtained, and in which, unlike the case of the Boulware vacuum state, observables are regular at the horizon. The $in$ vacuum is in fact the physical vacuum state one should consider when dealing with a black-hole formation process in an asymptotically flat spacetime.\par When using this vacuum, the overall resulting picture is that any stellar-mass object which \emph{collapses rapidly toward the formation of a horizon} generates extremely small RSETs. It is important to stress that ``rapidly'' in the previous sentence corresponds precisely to the standard situation one would expect when working in the framework of general relativity (defined by the Einstein field equations coupled to matter satisfying the standard energy conditions \cite{Barcelo2002,Curiel2014}) and taking into account the forces that are known to play a role in stellar evolution. In these situations, semiclassical effects are in fact so small that the collapse would proceed in almost exactly the same manner as in classical general relativity, forming a trapping horizon and continuing until the appearance of a Planck curvature region (see e.g. \cite{DFU,Parentani1994} for the first treatments of this problem and \cite{Barcelo2008,Unruh2018} for modern retakes). The crucial hypothesis of a ``\emph{rapid approach toward the formation of a horizon}'' is, therefore, perfectly sensible in most scenarios. However, the presence of a quantum bounce at Planck curvatures \cite{Barcelo2014,Barcelo2014b,Haggard2015} or the presence of metastable states before horizon formation \cite{Barcelo2008} might lead to situations in which this hypothesis is questionable. For instance, the divergent behaviour of the Boulware vacuum may be taken as a hint of the possibility that, even in a physical vacuum, the surroundings of a black-hole horizon may be a region where semiclassical corrections become large enough to be relevant to the evolution of the system. Indeed, as we will show, the hypothetical formation of ultracompact objects sustained very close to horizon formation (an alternative to black holes \cite{Visser2004,Visser2008,Cardoso2017,Cardoso2019,Carballo-Rubio2018}) appears to require at least a semiclassical treatment. Generally, if the RSET contribution overcomes its suppression by Planck's constant and becomes comparable to the classical stress-energy tensor, then a complete, non-perturbative semiclassical treatment of the problem is in order.\par In this work we study the values of the RSET for the $in$ vacuum of a free massless scalar field in spherically-symmetric geometries which approach the formation of a horizon in different ways. Previous works with the same motivation have examined some of the semiclassical effects produced by a collapse of matter which quickly decelerates just before reaching the formation of a horizon \cite{Pad2009,Harada2019}. Our present goal is to identify more generally the precise geometric characteristics of the dynamical situations which would cause large back-reaction close to horizon formation.\par In section \ref{s2} we start by reviewing the definitions of the functions which measure the deviation from classical physics. One of them is the already mentioned RSET, which directly serves as a source in the Einstein equations. The other is the effective temperature function (ETF) introduced in \cite{BLSV11}.
This is a generalisation of the Hawking temperature which characterises the flux of outgoing radiation at future infinity. As was shown in \cite{BBGJ16}, this function is directly related to the term in the RSET evaluated in dynamical vacua which regularises the divergence at the horizon of the static Boulware vacuum.\par Also in sec. \ref{s2}, we will introduce the generic structure of the geometries we will use, namely a Schwarzschild exterior and a Minkowski interior, separated by a thin spherical shell which moves radially along some timelike curve. We will provide a physical interpretation for the relations between the static null coordinates corresponding to the interior and exterior regions, which will allow us to understand how the modes of the massless field are dispersed when crossing the shell. After this, we will move on to specifying what types of trajectories for the shell will be used: oscillations close to (but above) the horizon, asymptotic approach to horizon formation, and actual formation of a horizon at low velocities. These are the three types of curves which exploit the physics of being close to horizon formation. Before embarking on a detailed study of these cases, we will explain briefly the different notions of horizon that one can define and their relevance for our analysis. In this introduction we have been deliberately vague in this respect; let us only anticipate that ``horizon'' must be identified with the notion of apparent/trapping horizon and not with event horizon, unless explicitly stated. Let us also note that most of the results in this paper will be phrased in terms of an exterior Schwarzschild geometry, but they are in fact more general and will apply equally to any exterior geometry with a non-zero surface gravity at the horizon.\par In section \ref{s4} we will study shell trajectories which oscillate just above the Schwarzschild radius $r_{\rm s}=2M$, $M$ being the mass of the shell (note that we will always be using natural units $G=\hbar=c=1$). This will serve as an example which illustrates how the relations between the null coordinates are associated with semiclassical effects in different dynamical regimes close to horizon formation.\par In section \ref{s5} we will explore shell trajectories which approach the horizon so slowly that they do not reach it in finite time. For this case, as we will show, the thin-shell approximation misses some important effects. Therefore we will extend our study to an arbitrary spherically-symmetric geometry for the interior region. The results we are interested in will be obtained through a study of the asymptotic future values of the ETF, which will allow us to compare the $in$ vacuum state to the Boulware vacuum.\par In section \ref{s6} we will go back to the thin-shell approximation and study geometries in which a horizon is formed in finite time. In this case the asymptotic behaviour of both the ETF and the RSET is well known \cite{Hawking1975,DFU}, so we will focus on their values at times close to the formation of the horizon. In particular, we will vary the velocity at which the shell crosses the $r=2M$ surface, paying close attention to the lower velocity results. In the final section we will summarise our findings. \section{Preliminaries}\label{s2} The overall aim of this work is to gauge the magnitude of semiclassical effects in a series of specific dynamical and spherically-symmetric geometries, characterised by being close to the formation of a horizon.
As is common in black-hole physics, we will use a single massless scalar field as a probe. For the calculation of the RSET, we will be making use of an analytic approximation to its exact form, which amounts to considering only the $s$-wave contributions and neglecting the backscattering effects of the geometry. As we will see, the closeness of the spacetime to the formation of a horizon will be a key factor in the behaviour of the RSET. \subsection{Renormalised stress-energy tensor in 1+1 dimensions} The analytic approximation to the RSET we will be considering takes advantage of the conformal invariance of the massless scalar field equation in 1+1 dimensions. All 4-dimensional spherically-symmetric metrics can be written as \begin{equation}\label{21} ds^2=-C(u,v)dudv+r^2d\Omega_2^2, \end{equation} where $C(u,v)$ is a positive function of the radial null coordinates (non-zero for regular coordinates). The calculation of the dimensionally reduced 1+1 RSET involves only the $\{u,v\}$ coordinates, i.e. the radial-temporal part of the geometry. In this approximation, $C(u,v)$ is the conformal factor which rescales the otherwise flat two-dimensional spacetime.\par A massless scalar field $\phi$ can be Fock-quantised on this background with a basis of solutions of the Klein-Gordon equation given by the ingoing and outgoing modes \begin{equation}\label{22} \phi_\omega^u=\frac{1}{\sqrt{4\pi\omega}}e^{-i\omega u},\quad \phi_\omega^v=\frac{1}{\sqrt{4\pi\omega}}e^{-i\omega v}, \end{equation} where $\omega>0$. Such a basis exists for any pair of null coordinates which cover the spacetime. Choosing a particular pair amounts to a choice of vacuum and particle states of the quantum field.\par The RSET is the renormalised vacuum expectation value of the stress-energy tensor operator constructed with the quantum field. For a choice of null coordinates $\{u,v\}$ and a corresponding vacuum $\ket{0}$, the components of the RSET in this same coordinate basis are (see \cite{DF}) \begin{subequations}\label{23} \begin{align} \expval{T_{uu}}_0&=\frac{1}{24\pi}\left[\frac{\partial_u^2C}{C}-\frac{3}{2}\left(\frac{\partial_uC}{C}\right)^2\right],\\ \expval{T_{vv}}_0&=\frac{1}{24\pi}\left[\frac{\partial_v^2C}{C}-\frac{3}{2}\left(\frac{\partial_vC}{C}\right)^2\right],\\ \expval{T_{uv}}_0&=\frac{1}{24\pi}\left[\frac{\partial_uC\partial_vC}{C^2}-\frac{\partial_u\partial_vC}{C}\right]. \end{align} \end{subequations} Only the trace of the RSET depends explicitly on the curvature: $\expval{T_\mu^\mu}=R/24\pi$ (see e.g. \cite{BD,DF,FN}). On the other hand, the traceless part of the RSET can be expressed in a geometric form in terms of the norm of the timelike vector field $\partial_t=\partial_u+\partial_v$ that defines the vacuum state $\ket{0}$ \cite{Barcelo2011}. This means that the traceless (and state-dependent) part of the RSET could in principle become large in regions of low curvature.\par
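The components in eq.~\eqref{23} and the trace relation above are straightforward to verify symbolically. The following minimal sketch (ours, purely illustrative; it uses the fact that for the 2D metric $ds^2=-C\,du\,dv$ the Ricci scalar is $R=(4/C)\partial_u\partial_v\ln C$) transcribes them with sympy:
\begin{verbatim}
# Sketch (not from the paper): symbolic check of the 1+1 RSET
# components of eq. (23) and the trace relation <T^mu_mu> = R/(24 pi).
import sympy as sp

u, v = sp.symbols('u v')
C = sp.Function('C', positive=True)(u, v)   # conformal factor C(u,v)

pref = 1/(24*sp.pi)
T_uu = pref*(sp.diff(C, u, 2)/C - sp.Rational(3, 2)*(sp.diff(C, u)/C)**2)
T_vv = pref*(sp.diff(C, v, 2)/C - sp.Rational(3, 2)*(sp.diff(C, v)/C)**2)
T_uv = pref*(sp.diff(C, u)*sp.diff(C, v)/C**2 - sp.diff(C, u, v)/C)

# Trace with g_{uv} = -C/2, hence g^{uv} = -2/C:
trace = 2*(-2/C)*T_uv
# 2D Ricci scalar of ds^2 = -C du dv:
R = (4/C)*sp.diff(sp.log(C), u, v)
print(sp.simplify(trace - R/(24*sp.pi)))    # -> 0
\end{verbatim}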
The choice of quantisation modes and vacuum state for this theory is arbitrary. In a different vacuum state $\ket{\tilde{0}}$, corresponding to a quantisation characterised by a different pair of coordinates $\{\tilde{u},\tilde{v}\}$, related to the first through two positive functions $g$ and $h$ such that \begin{equation}\label{24} \frac{du}{d\tilde{u}}=g(\tilde{u}),\quad \frac{dv}{d\tilde{v}}=h(\tilde{v}), \end{equation} the components of the RSET in the first $\{u,v\}$ coordinate basis are related to those in the $\ket{0}$ vacuum through \begin{subequations}\label{25} \begin{align} \expval{T_{uu}}_{\tilde{0}}&=\frac{1}{24\pi}\left(\frac{g''}{g^3}-\frac{3}{2}\frac{g'^2}{g^4}\right)+\expval{T_{uu}}_0,\\ \expval{T_{vv}}_{\tilde{0}}&=\frac{1}{24\pi}\left(\frac{h''}{h^3}-\frac{3}{2}\frac{h'^2}{h^4}\right)+\expval{T_{vv}}_0,\\ \expval{T_{uv}}_{\tilde{0}}&=\expval{T_{uv}}_0, \end{align} \end{subequations} where $g'\equiv\partial_{\tilde{u}}g(\tilde{u})$ and $h'\equiv\partial_{\tilde{v}}h(\tilde{v})$. With these expressions we can see that a change in the vacuum state translates into the addition of outgoing and ingoing radiation flux terms (which can be positive or negative).\par Apart from the RSET, we are interested in studying the values of the effective temperature function (ETF) \cite{BLSV11}, defined as \begin{equation}\label{26} \kappa_{\tilde{u}}^u\equiv-\left.\frac{d^2\tilde{u}}{du^2}\right/\frac{d\tilde{u}}{du}=\frac{g'}{g^2} \end{equation} for the outgoing radiation sector, and likewise with $u$ replaced by $v$ (and $g$ by $h$) for the ingoing sector. In the case of a spacetime representing the formation of a black hole, the usual Hawking effect is reflected in the constant value $\kappa_{u_{\rm in}}^{u_{\rm out}}=1/2r_{\rm s}=2\pi T_{\rm H},$ where $T_{\rm H}$ is the Hawking temperature in natural units. In more general terms, if $\kappa^u_{\tilde{u}}$ or $\kappa^v_{\tilde{v}}$ remains constant for a sufficiently long period of time (defined by an adiabaticity condition), the vacuum state defined by the $\{\tilde{u},\tilde{v}\}$ coordinates (through the modes in~\eqref{22}) will be seen by an observer with proper coordinates $\{u,v\}$ as a thermal state of outgoing or ingoing radiation respectively \cite{BLSV11}.\par This function is also directly related to the outgoing and ingoing radiation fluxes which appear in the RSET after a change of vacuum state \cite{BBGJ16}. Specifically, equations~\eqref{25} can be written as \begin{subequations}\label{27} \begin{align} \expval{T_{uu}}_{\tilde{0}}&=\frac{1}{24\pi}\left(\frac{d\kappa_{\tilde{u}}^u}{du}+\frac{1}{2}(\kappa_{\tilde{u}}^u)^2\right)+\expval{T_{uu}}_0,\\ \expval{T_{vv}}_{\tilde{0}}&=\frac{1}{24\pi}\left(\frac{d\kappa_{\tilde{v}}^v}{dv}+\frac{1}{2}(\kappa_{\tilde{v}}^v)^2\right)+\expval{T_{vv}}_0,\\ \expval{T_{uv}}_{\tilde{0}}&=\expval{T_{uv}}_0. \end{align} \end{subequations} In other words, the information about the difference between the RSETs in two different vacuum states is entirely contained in their relative ETFs (and first derivatives thereof).\par When calculating either of these quantities, the information about the choice of vacuum state is encoded in specific sets of null coordinates. The fact that the RSET is defined by relations between null coordinates is the reason why its expression is not generally given by geometric quantities that can be reduced to curvature invariants, and why it can in principle become large in regions of low curvature (as we will see explicitly).
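As a simple numerical illustration of eq.~\eqref{26} (ours, with an assumed toy coordinate relation rather than one derived from a specific geometry), take the standard late-time form $u_{\rm in}=u_H-A\,e^{-u_{\rm out}/2r_{\rm s}}$ characteristic of black-hole formation, with $u_H$ and $A$ constants; the ETF then plateaus at the Hawking value $1/2r_{\rm s}$:
\begin{verbatim}
# Sketch (toy relation, not from the paper): finite-difference
# evaluation of the ETF of eq. (26). For u_in = u_H - A exp(-u_out/2r_s)
# the ETF should plateau at the Hawking value 1/(2 r_s).
import numpy as np

r_s, A, u_H = 1.0, 1.0, 0.0
u_out = np.linspace(0.0, 20.0, 4001)
u_in = u_H - A*np.exp(-u_out/(2*r_s))

du = np.gradient(u_in, u_out)    # d(u_in)/d(u_out)
d2u = np.gradient(du, u_out)     # second derivative
kappa = -d2u/du                  # eq. (26), with tilde-u = u_in, u = u_out

print(kappa[2000], 1/(2*r_s))    # both ~ 0.5
\end{verbatim}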
For the spacetimes we will study, we are interested in calculating these quantities for two special quantum vacuum states: the $in$ and the $out$ states. The $in$-state ($out$-state) is the one defined by affine null coordinates at past (future) null infinity. In order to carry out the calculation, we will want to extend these sets of coordinates throughout the whole spacetime, if possible, and obtain the relations between them. However, if there is a horizon present at some point, one or both of these extensions may cover the spacetime only partially. For example, in a collapsing geometry which starts being almost flat and ends up forming a black hole, the $in$-state corresponds to the natural Minkowski vacuum at the asymptotic past which then evolves according to the dynamics of the system. On the other hand, the $out$-state would correspond asymptotically to the Boulware state, and its extension backward in time would cover only the region of spacetime outside the event horizon. \subsection{Thin-shell geometries with spherical symmetry} The geometries that we will analyse all consist of an internal Minkowskian region matched to an external Schwarzschild region of mass $M$ through a moving timelike shell. In the interior region one can write the metric as \begin{equation}\label{1} ds_-^2=-du_-dv_-+r_-^2d\Omega^2, \end{equation} where the subscript ``$-$'' refers to the interior region, and the radial null coordinates are related to the Minkowski time $t_-$ and radius $r_-$ through \begin{equation}\label{2} u_-=t_--r_-,\quad v_-=t_-+r_-. \end{equation} Equivalently, we can construct natural null coordinates in the Schwarzschild region as \begin{equation}\label{3} ds_+^2=-|f(r_+)|du_+dv_++r_+^2d\Omega^2, \end{equation} where $f(r)=1-2M/r$ is the redshift function, and in this case the null coordinates are related to the Schwarzschild time $t_+$ and radius $r_+$ through \begin{equation}\label{eq:u+def} u_+=\text{sign}\left[f(r_+)\right](t_+-r_+^*),\quad v_+=t_++r_+^*. \end{equation} Here $r_+^*$ is the tortoise coordinate obtained by integrating $dr_+^*=dr_+/f(r_+)$. The $u_+$ coordinate goes from $-\infty$ to $+\infty$, that is, between past null infinity and the Schwarzschild radius (if the exterior region reaches that far in). Inside the Schwarzschild radius (but outside the shell) we must define a different coordinate $u^i_+$, given by the same relation to $t_+$ and $r_+^*$ as $u_+$ above, and which goes from $-\infty$ at the horizon until it reaches some point of the spacelike singularity at some finite value. On the horizon itself, relations with this variable can only be obtained as a limit from either side. The sign of $f(r_+)$ ensures that $u_+$ and $u^i_+$ advance in the same direction as $u_-$, both outside and inside the horizon.\par The two geometries are connected by a thin spherical shell of mass $M$. In general, this matching is only possible if the shell's radial position follows a spacetime curve of the same causality type as seen from either side. In our case, we will require that this be a timelike trajectory, parametrised by $v_-=T_-(u_-)$ from the inside and by $v_+=T_+(u_+)$ from the outside. Of course, given one of these curves the other is also fixed.
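For the numerical evaluations later on, the exterior null coordinates in eq.~\eqref{eq:u+def} can be implemented explicitly outside the horizon, where the tortoise coordinate has the standard closed form $r_+^*=r_++2M\ln\left(r_+/2M-1\right)$. A small helper sketch (conventions and sample values ours) follows:
\begin{verbatim}
# Sketch: exterior null coordinates (u_+, v_+) of eq. (eq:u+def) for
# r > 2M, where sign(f) = +1, using the closed-form tortoise coordinate.
import numpy as np

M = 0.5                                 # so that r_s = 2M = 1

def r_star(r):
    """Tortoise coordinate, integral of dr/f with f = 1 - 2M/r."""
    return r + 2*M*np.log(r/(2*M) - 1.0)

def null_coords(t, r):
    """(u_+, v_+) for a point (t_+, r_+) outside the horizon."""
    rs = r_star(r)
    return t - rs, t + rs

print(null_coords(0.0, 2.0))            # the point t_+ = 0, r_+ = 2 r_s
\end{verbatim}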
For convenience we will also define the velocity parameters \begin{equation}\label{5} \alpha_-\equiv\left.\frac{dv_-}{du_-}\right|_{\rm shell},\qquad \alpha_+\equiv\left.\frac{dv_+}{du_+}\right|_{\rm shell} \end{equation} (which are simply the derivatives of $T_\pm$), both of which take values in $(0,\infty)$ for a timelike trajectory. For an ingoing shell, approaching the speed of light corresponds to the limit $\alpha_\pm\to0$; for an outgoing shell, reaching the speed of light corresponds to $\alpha_{\pm}\to\infty$. A static shell has $\alpha_\pm=1$.\par In order to complete the definition of this geometry, we must require that the metric be continuous at the shell. This will allow us to determine the trajectory of the shell as seen from one side if it is defined on the other. It will also allow us to extend the ``$+$'' coordinates into the ``$-$'' region and vice versa.\par From matching the null part of the line elements we obtain the functions defined in \eqref{24}, \begin{equation}\label{6} g=\frac{du_+}{du_-}=\left.\sqrt{\frac{\alpha_-}{|f|\alpha_+}}\right|_{\rm shell},\qquad h=\frac{dv_+}{dv_-}=\left.\sqrt{\frac{\alpha_+}{|f|\alpha_-}}\right|_{\rm shell}, \end{equation} which can be expressed in either the ``$+$'' or ``$-$'' variables. From matching the radial parts we get the relation between the velocity parameters of the shell from either side, \begin{equation}\label{7} \alpha_+=\text{sign}(f)+\frac{1}{2|f|}\frac{(1-\alpha_-)^2}{\alpha_-}-\frac{1}{2|f|}\frac{1-\alpha_-}{\alpha_-}\sqrt{4\alpha_- f+(1-\alpha_-)^2}. \end{equation} Thus if we define the trajectory in terms of $T_-$, we can obtain $T_+$ by integrating $\alpha_+$ from the same initial radial position. We can also obtain the relations $u_+(u_-)$ and $v_+(v_-)$ by integrating the functions $g$ and $h$.\par From the square root in \eqref{7} we deduce a condition for the continuous matching of the geometries, namely that the $\alpha_-$ parameter which defines the movement of their separation surface must be such that $4\alpha_- f+(1-\alpha_-)^2$ remains non-negative. In other words, $\alpha_-$ must tend to zero (the infalling shell must approach light-speed) inside the Schwarzschild radius in such a way as to compensate for the increasingly negative value of the redshift function. The parameter $\alpha_-$ which satisfies \begin{equation} 4\alpha_- f|_{\rm shell}+(1-\alpha_-)^2=0 \end{equation} defines the slowest possible collapse inside the event horizon as seen from the (rapidly disappearing) Minkowski region. \subsection{Interpretation of the terms in $g$ and $h$} Let us focus on the function $g$ outside the Schwarzschild radius, \begin{equation}\label{8} g=\frac{du_+}{du_-}=\frac{1}{\sqrt{f}}\sqrt{\frac{\alpha_-}{\alpha_+}}. \end{equation} The presence of the term $1/\sqrt{f}$ is to be expected, as it represents the redshift experienced by an outgoing light ray. This can be seen most clearly in the case of a static shell (which, of course, would sit outside the horizon), for which $\alpha_\pm=1$. There, it simply provides the rescaling of the coordinates required for the matching of the angular parts of the geometry.\par The $\alpha_-/\alpha_+$ term has a purely dynamical origin. The velocity of the shell seen by a static observer on one of its sides is different from the one seen by a static observer on the other. In their respective null coordinates this can be seen as a change in the slope of the line tangent to the shell trajectory, namely $\alpha_-\to\alpha_+$ (see fig.~\ref{f1}).
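This change of slope can be evaluated directly from the junction relation \eqref{7}. A minimal numerical transcription (ours; all quantities are understood to be evaluated on the shell) is:
\begin{verbatim}
# Sketch: exterior velocity parameter alpha_+ from eq. (7), given the
# interior parameter alpha_- and the redshift f = 1 - 2M/R at the shell.
import numpy as np

def alpha_plus(alpha_m, f):
    disc = 4*alpha_m*f + (1 - alpha_m)**2
    if disc < 0:
        raise ValueError("no continuous matching for these values")
    return (np.sign(f)
            + ((1 - alpha_m)**2/alpha_m
               - (1 - alpha_m)/alpha_m*np.sqrt(disc))/(2*abs(f)))

print(alpha_plus(1.0, 0.5))   # static shell: alpha_+ = 1 for any f > 0
\end{verbatim}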
From the perspective of the shell, which can use the appropriate coordinates for each side, this looks something like a spacetime refraction phenomenon. If a light ray is incident from the inside at an angle $\theta$ with respect to the shell trajectory, it exits at an angle $\theta'$ related to the first by \begin{equation} \frac{\tan \theta'}{\tan \theta}=\frac{\alpha_-}{\alpha_+}. \end{equation} For the angles formed by an ingoing ray, the relation is the inverse of the above. \begin{figure} \centering \includegraphics[scale=.7]{text6950} \caption{Change in angle with respect to the shell of an outgoing light ray, as measured by static observers on either side.} \label{f1} \end{figure}\par Another way to interpret the $\alpha_-/\alpha_+$ term is as a kind of Doppler effect, even though technically there is no interaction between the matter in the shell and the light ray crossing it which could cause absorption and reemission. The effect can be seen clearly as follows. If we define \begin{equation} R(t_\pm)\equiv r|_{\rm shell}(t_\pm)\quad\text{and}\quad \dot{R}=dR/dt_-,\quad R'=dR/dt_+, \end{equation} then \begin{equation}\label{10} \alpha_-=\frac{1+\dot{R}}{1-\dot{R}},\quad \alpha_+=\frac{1+R'/f(R)}{1-R'/f(R)}. \end{equation} That is, the quotient $\alpha_-/\alpha_+$ represents the Doppler shift for a ray that is ``absorbed'' at one side by a shell moving at a velocity $\dot{R}$ and ``reemitted'' on the other by a shell moving at a different velocity, $R'/f(R)$. If the geometry on both sides were the same, there would be no net effect, as these velocities would be the same.\par It is worth mentioning that there may be a difficulty in interpreting the above expressions at the Schwarzschild radius, since the coordinate $t_+$ used for the derivative in the second equation in \eqref{10} is not regular there. To see the behaviour of $\alpha_+$ more clearly we can switch to a regular time coordinate, say the Painlevé-Gullstrand $\tau_+$ defined as the proper time of a free-falling observer from infinity in the Schwarzschild region \cite{Martel2001}, which satisfies \begin{equation} d\tau_+=dt_++\frac{\sqrt{1-f(r_+)}}{f(r_+)}dr_+. \end{equation} We can then define the radial velocity $R_{,\tau}\equiv dR/d\tau_+$, which is regular at the horizon. Then the second equation in \eqref{10} becomes \begin{equation}\label{12} \alpha_+=\frac{1+R_{,\tau}/(1+\sqrt{2M/R})}{1-R_{,\tau}/(1-\sqrt{2M/R})}, \end{equation} from which we can see that at $r=2M$, $\alpha_+=0$ and the function $g$ in \eqref{8} diverges.\par In light of these results, we will call the $1/\sqrt{|f|}$ terms in the functions $g$ and $h$ the ``redshift'' terms, and the ones with a quotient of $\alpha$'s the ``Doppler'' terms. Combining equations \eqref{6}, \eqref{10} and \eqref{12} we can write the total functions as \begin{subequations}\label{13} \begin{align} &g(u)=\frac{1}{\sqrt{f}}\sqrt{\frac{1+\dot{R}}{1-\dot{R}}}\sqrt{\frac{1-R_{,\tau}/(1-\sqrt{2M/R})}{1+R_{,\tau}/(1+\sqrt{2M/R})}},\\ &h(v)=\frac{1}{\sqrt{f}}\sqrt{\frac{1-\dot{R}}{1+\dot{R}}}\sqrt{\frac{1+R_{,\tau}/(1+\sqrt{2M/R})}{1-R_{,\tau}/(1-\sqrt{2M/R})}}, \end{align} \end{subequations} in which all quantities are evaluated at the points where the lines $u=const.$ and $v=const.$ intersect the shell trajectory, respectively. We could work directly with these expressions instead of \eqref{6} by defining the trajectory through the velocities $\dot{R}$ and $R_{,\tau}$, which must satisfy a relation similar to \eqref{7}.
However, throughout this work we will keep using the $\alpha_\pm$ parameters, as they are more natural and simple when dealing with the relations between null coordinates needed for the calculation of semiclassical quantities. \subsection{Geometries near horizon formation} The geometries we are going to study are all characterised by being close to horizon formation in a specific sense that is clarified in this section. Probably the best-known characterisation of black holes is the classical one in terms of event horizons, the definition of which exploits the notion of future null infinity in asymptotically flat spacetimes \cite{Hawking1973}. However, event horizons display a number of undesirable features, such as their lack of an unambiguous relation to strong gravitational fields \cite{Ashtekar2004}, or their nonlocal nature, which forbids their detection in experiments (that necessarily take place in finite regions of spacetime) \cite{Visser2014}. Even if the geometries analysed below contain event horizons, the notion of being close to horizon formation that is relevant to our analysis is always described in terms of local properties of spacetime and matter fields, and is closely related to (quasi-)local definitions of the boundaries of black holes in terms of apparent horizons. One can alternatively use the concept of trapping horizons or other equivalent definitions (see \cite{Gourgoulhon2008}, for instance, for a review); however, in the situations analysed here, all these definitions become equivalent, so there is no need for us to discuss their differences. In some of the geometries analysed below, the positions of apparent/trapping horizons and event horizons coincide. This should not be taken as an indication that our results are tied in any way to the formation of event horizons. In fact, it is always possible to deform these geometries in such a way that event horizons are removed completely while the local geometric conditions that eventually lead to their formation in the undeformed geometries are maintained for arbitrarily long times (for geometries in which apparent/trapping horizons form in finite time, this would imply that they remain present for a large, but finite, amount of time); this would yield the same results up to arbitrarily small deviations.\par One of the shortcomings of these (quasi-)local definitions of the boundaries of black holes (with respect to the notion of event horizon) is their non-uniqueness \cite{Ashtekar2005}. This issue disappears in practice when dealing with spherically-symmetric backgrounds, as one can focus on trapping horizons that are spherically-symmetric as well, the location of which turns out to be determined by the quasi-local Misner-Sharp mass \cite{Nielsen2008} that measures the overall energy enclosed in a given sphere \cite{Hayward1994}. When the geometry exterior to the shell is the Schwarzschild geometry, the location of the horizon defined this way is simply the Schwarzschild radius.\par In this work, ``close to horizon formation'' will therefore mean that the shell has trajectories exploring the surroundings of the Schwarzschild radius. There, we expect to find interesting semiclassical effects, and we want to understand their dependence on the precise dynamical properties of the spacetime as it approaches this point. To this end, we have chosen three types of shell trajectories, the study of which we believe will lead to the necessary insight for any general situation. The first type of situation, studied in sec.
\ref{s4}, is when a shell oscillates between two radii, outside but near the Schwarzschild radius. This situation provides the simplest model of ultracompact objects subject to small pulsations. Varying the characteristics of this oscillation will allow us to explore a wide range of short-term dynamical behaviours. The second type of situation (sec. \ref{s5}) will explore the consequences of a long-term monotonic dynamical behaviour, particularly one which we expect (both \textit{a priori} and based on results of sec. \ref{s4}) to present interesting semiclassical effects: a shell approaching the Schwarzschild radius asymptotically in a regular time coordinate. As we will see, in this study it will become apparent that we need to go beyond the thin-shell approximation. The asymptotic approach can be stopped at any time, so that these configurations could model, for example, a relaxation phase towards an ultracompact object. For the third type we go back to thin shells, and conclude our study with the case in which they actually form a horizon in finite regular time, though they do so while moving at an arbitrarily slow pace. Our analysis here, which is an extension of \cite{Barcelo2008}, allows us to clearly see how the strength of semiclassical effects depends crucially on the collapsing velocity at horizon formation. This will provide a counterpoint to the already well-known results for a shell collapsing at high velocities or even light speed (see e.g. \cite{FN}).\par It is worth mentioning at this point that our analysis throughout this work will be purely geometrical, and thus goes beyond the Einstein equations. In other words, we will be exploring the effects of a geometry on semiclassical quantities without being concerned with how the geometry itself is generated. In particular, we will not require that the evolution of the geometry be governed by the Einstein equations with a stress-energy tensor which satisfies some energy conditions. Although it is certainly interesting to study the properties of the matter content (both classical and semiclassical) which would generate the geometries in question, this lies beyond the scope of the present work. Instead, our geometry-based results will just point the way toward the configurations which should be analysed in further detail in future works in the context of semiclassical gravity. \section{Oscillating thin shells}\label{s4} In this section we will study the behaviour of the functions $g$ and $h$, which relate the ``$+$'' and ``$-$'' coordinates, when the shell gets near the formation of a horizon but does not reach it. Nonetheless, it will follow a trajectory which covers a wide range of dynamical configurations, in which both the redshift and Doppler effects will have significant contributions to the values of these functions and their derivatives. Namely, we will consider a high-speed radial oscillation about a point just above the surface with radius $r_{\rm s}=2M$ (in the following, we will always take $r_{\rm s}=1$ for numerical evaluations). We will use three parameters to describe this movement: the distance $d$ from the centre of oscillation to the horizon, the amplitude $A$ and the frequency $\omega$. Then, the radius at which the shell is located will follow the spacetime curve (see fig.~\ref{f2}) \begin{equation} R(t_-)=r_{\rm s}+d+A\sin(\omega t_-).
\end{equation} In order to avoid the formation of a horizon and maintain a timelike trajectory, the parameters must satisfy the relations \begin{equation} A<d\quad \text{and}\quad A\omega<1. \end{equation} We stress once again that the purpose of this study is to gain a better understanding of the relation between dynamical regimes close to horizon formation and the magnitude of semiclassical effects, and not to provide a self-consistent solution with a classical matter content which satisfies some energy conditions. Thus we only impose that the shell be causal, with no further restrictions on its trajectory.\par \begin{figure} \centering \includegraphics[scale=.6]{f2} \caption{Oscillatory radial trajectory of the shell (with parameters $d=0.1$, $A=0.099$ and $\omega=10$). The dashed line represents the $r=2M$ $(=1)$ surface, and the diagonal lines represent a light ray entering and exiting the interior region. Although not visible in the figure, the thick oscillatory curve does not touch the $r=2M$ line.} \label{f2} \end{figure} Since the trajectory is described in terms of the interior coordinate system, we can obtain the simple expression for the interior velocity parameter \begin{equation} \alpha_-=\frac{1+A\omega\cos(\omega t_-)}{1-A\omega\cos(\omega t_-)}, \end{equation} while for $\alpha_+$ we must use eq. \eqref{7}. To evaluate these quantities at the points where the shell trajectory intersects the lines of constant $u$ or $v$ we must solve a transcendental equation, which we will do numerically. First we will obtain the individual values of the functions $g$ and $h$, which represent the change in the coordinate description of outgoing and ingoing radial light rays respectively. Then we will calculate the quotient $g/h$, with $h$ evaluated at a point of entry $v_-$ of a light ray into the Minkowski region and $g$ evaluated at the point of exit $u_-$; this quotient carries information about how light rays are dispersed in time by passing through this region. Looking at eq. \eqref{2} we can see that an ingoing ray $v_-$ connects with an outgoing ray with $u_-=v_-$, so the quotient we are looking for is $g(v_-)/h(v_-)$. This quantity will also describe the evolution of the $in$ quantum vacuum state, defined at the asymptotically flat region at past null infinity, and its comparison with the $out$ vacuum state, defined at future null infinity. \begin{figure} \centering \includegraphics[scale=.5]{f31} \includegraphics[scale=.5]{f32} \includegraphics[scale=.5]{f33} \caption{Functions $g$ and $h$ for an oscillation with parameters $d=0.1$, $A=0.099$ and three different frequencies: $\omega=10$, $\omega=0.5$ and $\omega=0.1$. The peaks are produced when the shell is nearly at the closest point to the horizon, as will be discussed below. We observe that at low frequencies the functions practically coincide since the light rays enter and leave the interior region in a time much smaller than $\omega^{-1}$, so the in-crossing and out-crossing dispersion effects would almost cancel out (i.e. $g(u_-)/h(v_-=u_-)\simeq1$). At somewhat larger frequencies the light rays enter and exit at appreciably different points of the oscillation and the functions attain a relative displacement.
Finally, at frequencies which make the shell move at nearly light-speed the displacement is greater still, and the peaks become somewhat tilted to one side for each function, due to the fact that the peaks of the sine function in $t_-$ become tilted when seen in the $u_-$ and $v_-$ coordinates (in opposite directions).} \label{f3} \end{figure} \par In figure \ref{f3} we observe the values of the functions $g$ and $h$ evaluated at $u_-$ and $v_-=u_-$, representing the dispersion of a light ray when it is exiting and entering the interior region respectively. The net effect, given by $g/h$, reduces to nearly unity when $g(u_-)\simeq h(u_-)$, which occurs when the shell is oscillating very slowly (at low $\omega$) compared to the time it takes for light to cross it (in the static limit, $g/h=1$). At higher frequencies the light rays enter and exit at completely different points of the oscillation, as in the case represented in fig.~\ref{f2}, and the net effect becomes appreciable. It is easy to notice that there are some special cases for this net effect corresponding to different resonances between the oscillation frequency and the crossing time of the light ray: for instance, when a ray enters at a maximum of the oscillation and also exits at one, or enters and exits at a minimum, among a few other such situations. These will be studied in more detail in the following subsection. \subsection{Resonance between in-crossing and out-crossing effects} From equations \eqref{6} we can obtain the expression for the total temporal dispersion suffered by a light ray entering the shell at a point ``in'' and exiting at a point ``out'', \begin{equation}\label{17} \frac{du_{+,{\rm out}}}{dv_{+,{\rm in}}}=\frac{g|_{\rm out}}{h|_{\rm in}}=\frac{\sqrt{f}|_{\rm in}}{\sqrt{f}|_{\rm out}}\left.\sqrt{\frac{\alpha_-}{\alpha_+}}\right|_{\rm out}\left.\sqrt{\frac{\alpha_-}{\alpha_+}}\right|_{\rm in}, \end{equation} where in the first step we have made use of the fact that for rays reflecting at the origin $dv_-|_{\rm in}/du_-|_{\rm out}=1$, as can be seen from \eqref{2}. We can see again that for a static shell, for which the surface redshift function would be constant and $\alpha_\pm=1$, this quotient reduces to unity. For a moving shell the effects can cancel out again only in one special case, which occurs when not only the redshift function takes the same value at the entry and exit points, but also $\alpha_-=1$ (and therefore $\alpha_+=1$ as well, as can be seen from eq. \eqref{7}) at both points. For the case of an oscillating shell this can occur only when a light ray exists such that it both enters and exits at a minimum or at a maximum of $R(t_-)$. Then the effects cancel out locally, but they remain non-trivial for the rest of the light rays. These local resonances are possible only when the frequency, amplitude and distance from the horizon satisfy the relations \begin{equation}\label{18} \omega=\frac{n\pi}{r_{\rm s}+d-A}, \quad\text{with }n\text{ integer less than }\quad\frac{r_{\rm s}+d-A}{A\pi}, \end{equation} for a ray entering and exiting at a minimum, and likewise \begin{equation}\label{19} \omega=\frac{n\pi}{r_{\rm s}+d+A}, \quad\text{with }n\text{ integer less than }\quad \frac{r_{\rm s}+d+A}{A\pi}, \end{equation} for a maximum. These expressions are obtained simply by comparing the ray crossing time and the oscillation periods in the coordinate $t_-$. The upper bound on the values of $n$ comes from the causal restriction $A\omega<1$.
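As a quick check (ours, in $r_{\rm s}=1$ units), the allowed resonant frequencies for given $d$ and $A$ can be enumerated directly from eqs.~\eqref{18}--\eqref{20}:
\begin{verbatim}
# Sketch: resonant frequencies of eqs. (18)-(20) for d = 0.1, A = 0.099,
# with n bounded by the causal condition A*omega < 1.
import numpy as np

r_s, d, A = 1.0, 0.1, 0.099

def resonances(L):
    """omega = n*pi/L with A*omega < 1, cf. eqs. (18)-(19)."""
    n_max = int(np.floor(L/(A*np.pi)))
    return [k*np.pi/L for k in range(1, n_max + 1)]

print(resonances(r_s + d - A))   # enter and exit at a minimum
print(resonances(r_s + d + A))   # enter and exit at a maximum
# eq. (20): enter at a maximum, exit at a minimum (maximal redshift)
L0 = r_s + d
n_max = int(np.floor(L0/(A*np.pi) - 0.5))
print([(np.pi/2)*(2*k + 1)/L0 for k in range(n_max + 1)])
\end{verbatim}
With these parameters the largest allowed frequencies are $\omega\simeq9.42$ (from eq.~\eqref{18} with $n=3$) and $\omega\simeq9.996$ (from eq.~\eqref{20} with $n=3$), the values used in the figures below.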
For a shell following an arbitrary (known) radial motion such cases can be found just as easily.\par On the other hand, if we want to see when a maximisation of $du_{\rm out}/dv_{\rm in}$ in eq. \eqref{17} takes place, a more detailed analysis is necessary. First, we may notice that when a light ray enters at a maximum of $R(t_-)$ and exits at a minimum, the total redshift effect is maximised. For such a ray to exist, the relation between the parameters must be \begin{equation}\label{20} \omega=\frac{\pi}{2}\frac{2n+1}{r_{\rm s}+d}, \quad\text{with }n\text{ integer less than }\quad \frac{r_{\rm s}+d}{A\pi}-\frac{1}{2}. \end{equation} In fig.~\ref{f4} we observe the three terms of the rhs of eq. \eqref{17} plotted (without the square roots) for this case. The peaks of the redshift term, which correspond to precisely the light rays described, reach their highest possible values for the parameters $A$ and $d$ used. \begin{figure} \centering \includegraphics[scale=.59]{f41} \includegraphics[scale=.59]{f42} \caption{Left: total redshift and Doppler terms plotted separately (the square root of their product gives $du_{\rm out}/dv_{\rm in}$) for an oscillation with parameters $d=0.1$, $A=0.099$ and $\omega\simeq9.996$, given by \eqref{20} with $n=3$. We see that even though the shell reaches 99\% of the speed of light (as seen in the Minkowski coordinates) and the Doppler terms become quite large, the redshift term clearly gives the dominant contribution around its maxima. Right: in-crossing and out-crossing Doppler terms plotted separately. We observe that the peaks and valleys are completely out of phase between the two.} \label{f4} \end{figure}\par Also in fig.~\ref{f4}, we observe that the individual Doppler terms have distinct maxima. For the in-crossing term the maximum is produced for a ray which enters slightly after the one which maximises redshift (which enters at a maximum of $R$), during the in-fall of the shell. For the out-crossing term it is produced for a ray which exits slightly before the redshift-maximising one (which exits at a minimum of $R$), so again during an in-fall of the shell. Guided by this result, we can look for the conditions which maximise the individual Doppler terms, and also see whether there is a frequency for the shell at which the two peaks coincide to produce a maximal net effect. In fig.~\ref{f5} we can directly see the values which $\alpha_-/\alpha_+$ takes at different redshifts $f$ and velocity parameters $\alpha_-$.\par \begin{figure} \centering \includegraphics[scale=.45]{f5} \includegraphics[scale=.45]{f51} \caption{Values of the Doppler term $\alpha_-/\alpha_+$ as a function of $f$ and $\alpha_-$. The axes of $\alpha_-$ and $\alpha_-/\alpha_+$ have been rescaled with the function $\mathrm{at}(x)=\frac{2}{\pi}\arctan(x)$ to map their whole range into $(0,1)$. The curve drawn on top of the surface on the left represents the values taken during a period of oscillation with parameters $d=0.1$, $A=0.099$ and $\omega=10$, and the curve on the right during an oscillation with parameters $d=10$, $A=9.99$ and $\omega=0.1$. The region with a sharp gradient close to the horizon ($f=0$) is produced around $\alpha_-=1$ ($1/2$ in the plot), corresponding to the transition from falling inward (during which time $\alpha_-<1$) to going outward (during which $\alpha_->1$).
As $f\to1$ the value of the Doppler term tends to 1 smoothly, as the interior and exterior geometries become the same.} \label{f5} \end{figure} At the very minimum of the oscillation of the shell (the closest point to $r=r_{\rm s}$), $\alpha_-/\alpha_+=1$ and there is no Doppler effect for any redshift $f$. When the shell is moving outward ($\alpha_->1$) but is still close to the horizon ($f\ll1$), from eq. \eqref{7} we get \begin{equation} \frac{\alpha_-}{\alpha_+}\simeq\frac{\alpha_-^2}{(1-\alpha_-)^2}f, \end{equation} that is, at a constant velocity the Doppler term has a linear dependence on the redshift function, with a slope which grows rapidly as $\alpha_-\to1^+$ and which tends to 1 as $\alpha_-\to\infty$. On the other hand, when the shell is falling in, the function close to the horizon can be expressed as \begin{equation} \frac{\alpha_-}{\alpha_+}\simeq\frac{(1-\alpha_-)^2}{f}, \end{equation} which grows parabolically as $\alpha_-\to0$ (as the in-fall speed increases) and hyperbolically as $f\to0$ (as the formation of the horizon is approached).\par With the above equations and fig.~\ref{f5} we can see that the point of the shell trajectory where the Doppler effect reaches a maximum appears in the $\alpha_-<1$ region, and that its precise position is influenced by two factors: at a constant $f$ it is maximum at the highest velocity (lowest $\alpha_-$), increasing parabolically as $\alpha_-$ decreases, while at a constant velocity it is maximum at the lowest $f$, with a hyperbolic divergence at $f=0$. If $\alpha_-$ approaches 1 while $f$ approaches 0, that is, if the shell tends to a full stop just before the formation of the horizon, then, when $f$ is sufficiently small, the hyperbolic divergence dominates over the parabolic tendency to zero and the maximum is reached at a point very close to the minimum value of $f$, just before the region of very large gradient observed in fig.~\ref{f5} is entered. If, on the other hand, the shell oscillations occur far away from the Schwarzschild radius $r_{\rm s}$, the maximum Doppler effect is reached closer to the point of maximum in-fall velocity (minimum $\alpha_-$).\par As an example, in fig.~\ref{f6} we can see the almost-coincidence of the two Doppler peaks (it looks exact in the figure) for an oscillation which bounces at $d-A=10^{-3}r_{\rm s}$, with a frequency $\omega$ that allows rays entering at a minimum of $R$ to also exit at one. The rays which maximise the in-crossing and out-crossing Doppler effects almost coincide with the ones which cancel out the redshift effect, and even more so with each other. Even when the peaks do not exactly coincide, due to their widths the net Doppler effect given by their product can be very close to its maximum possible value.\par \begin{figure} \centering \includegraphics[scale=.59]{f61} \includegraphics[scale=.59]{f62} \caption{Left: total redshift and Doppler terms plotted separately (the square root of their product gives $du_{\rm out}/dv_{\rm in}$) for an oscillation with parameters $d=0.1$, $A=0.099$ and $\omega\simeq9.42$, given by eq. \eqref{18} with $n=3$. In this case we see the Doppler term clearly dominates. Right: in-crossing and out-crossing Doppler terms plotted separately. We observe the almost-coincidence of the two Doppler peaks, produced for two very close rays passing through the shell slightly before the one which enters and exits at a minimum of the oscillation.
This near-coincidence results in the dominance of the net Doppler term in the left graph.} \label{f6} \end{figure} To conclude, these resonant cases have allowed us to understand the behaviour of the quotient $g/h$ around its highest values, and relate it to specific dynamical regimes of the shell. As we will see, the observed regions of rapid increase or decrease will have a significant influence on the behaviour of semiclassical effects. \subsection{Semiclassical effects} So far we have studied the dispersion of light rays (or analogously, of modes of the massless scalar field) which cross the oscillating shell and pass through the interior Minkowski region. From these results we can directly calculate the semiclassical quantities discussed earlier, namely the ETF and the RSET. The behaviour of these quantities will be similar to that of the dispersion functions described above, as the former are constructed simply from derivatives of the latter. The structure of peaks and plateaus for each period of the oscillation will merely become more exaggerated for these new functions. For reference, the structure of the ETF due to a single interval of deceleration during collapse has been previously studied in some detail in \cite{Harada2019}. Our study is based on considerably different dynamics, but the results are qualitatively similar.\par In fig.~\ref{f7} we can see the ETF $\kappa_{u_{\rm in}}^{u_{\rm out}}$, which contains information about the flux of particles seen in the $in$ vacuum state by an inertial observer at future null infinity, calculated with the relation between the $in$ and $out$ coordinates given by the product of the functions plotted in fig.~\ref{f4} through eq. \eqref{17}. As can be guessed by observing the curves in fig.~\ref{f4}, the smaller peaks in $\kappa_{u_{\rm in}}^{u_{\rm out}}$ are produced around the maxima of the Doppler effect contributions. On the other hand, the largest negative and positive peaks are produced in the regions of large gradient on either side of the maximum of the redshift contribution (keep in mind that the horizontal axes of the two plots are rescaled versions of each other). Between each set of peaks there is a region of smoothly decreasing temperature, with values around the Hawking temperature.\par In fig.~\ref{f8} we have plotted the outgoing radiation flux at future null infinity, defined as the difference between $\expval{T_{u_{\rm out}u_{\rm out}}}$ evaluated for the $in$ and $out$ vacuum states. From equations \eqref{27} we see that this quantity depends on $\kappa_{u_{\rm in}}^{u_{\rm out}}$ and its derivative, explaining the somewhat similar, but amplified, characteristics. This quantity alone is representative of the highs and lows of the RSET during the oscillation, since the term which is missing is simply the Boulware vacuum polarisation, which maintains low values in the $u_{\rm out}$ coordinate (outside the horizon it is below the Hawking flux value in fig.~\ref{f8}). It is the $u_{\rm out}$ coordinate itself which tends to become irregular, leading to a general amplification of both terms (tending to a divergence at the horizon if they do not perfectly compensate each other).\par \begin{figure} \centering \includegraphics[scale=.6]{f7} \caption{ETF $\kappa_{u_{\rm in}}^{u_{\rm out}}$ produced by an oscillating shell with the same parameters as the ones used for fig.~\ref{f4}, for which the net redshift effect is maximised.
The inset is a magnification of the plateau region, along with a comparison with the value $\kappa_{\rm H}$ of the function in the case of Hawking radiation.} \label{f7} \end{figure} \begin{figure} \centering \includegraphics[scale=.6]{f8} \caption{Difference between the $u_{\rm out}u_{\rm out}$ components of the RSET in the $in$ and $out$ vacuum states, corresponding to the outgoing flux of radiation which appears at future null infinity. As in the case of the ETF, we observe periodic peaks, corresponding to the rays which enter at a maximum of the oscillation and exit at a minimum (maximising the redshift effect), and a flatter intermediate region with values near that of the Hawking radiation flux produced after the formation of a horizon, superimposed in the zoomed-in rectangle on the right.} \label{f8} \end{figure} \begin{figure} \centering \includegraphics[scale=.6]{f9} \caption{ETF in the outgoing radiation sector, for an oscillation which maximises the net Doppler effect, obtained with the functions plotted in fig.~\ref{f6}.} \label{f9} \end{figure} In fig.~\ref{f9} we observe the ETF for the oscillation which maximises the net Doppler effect. The two most notable differences with respect to the case which maximises redshift are the somewhat cleaner large peaks, caused by a better overall coincidence of the in-crossing and out-crossing effects around the minima of the oscillation, and a less clean intermediate region, caused in turn by a worse coincidence there.\par In order to give a more general picture of the semiclassical effects produced by this type of shell trajectory, we can study the consequences of changing the order of magnitude of each of the oscillation parameters. First, in fig.~\ref{f10} we see the behaviour of the ETF for an oscillation with the same proximity to the horizon (between 0.001 and 0.201 times $r_{\rm s}$) but with a much lower velocity, reaching at most about $0.15\%$ of the speed of light. In this case all semiclassical fluxes are greatly diminished, approaching the static shell limit in which the radiation temperature and flux become zero.\par \begin{figure} \centering \includegraphics[scale=0.6]{f10.pdf} \caption{ETF in the outgoing radiation sector, for an oscillation at a low velocity (less than or equal to $0.15\%$ of the speed of light) and with a radial proximity to the horizon between $10^{-3}r_{\rm s}$ and $0.2r_{\rm s}$ (the same as in all the cases seen so far). Compared to the cases with higher velocities, we observe a significant decrease in its values and a smoothing of its derivative.} \label{f10} \end{figure} Another possibility is to maintain the maximum proximity to the horizon ($10^{-3}r_{\rm s}$) and the large maximum speed ($\sim99\%$ of the speed of light), but to vary the amplitude of the oscillation. Decreasing the amplitude leads to a qualitatively similar result for the ETF: each period contains a cluster of large peaks (larger as the amplitude decreases) surrounded by a region of values close to the Hawking temperature. On the other hand, increasing the amplitude to above $0.1r_{\rm s}$ leads to a general decrease in the values at both the peaks and the intermediate regions.\par The last parameter we can vary is the proximity to the horizon. Understandably, if the shell oscillates very far from the horizon, the ETF and its first derivative become very small, even if the maximum velocities are large.
On the other hand, if the shell is close to the horizon, around $10^{-3}r_{\rm s}$ or closer at the minimum of the oscillation, and its amplitude is not very large, then the closer it is, the larger the peaks become, but the intermediate region again remains at values around the Hawking temperature on average.\par \section{Approaching the horizon asymptotically}\label{s5} For the case of an oscillating shell studied so far, we have noted that the highest values of the functions which measure the dynamical semiclassical effects are produced when the horizon is approached at very low velocities (around the minima of the oscillation). In fig.~\ref{f5} this was seen through the large gradient in the Doppler term around $\alpha_-=1$ at low values of the redshift function $f$, since both the ETF and the RSET depend precisely on the derivatives of this term. To explore the gap between the cases in which the shell bounces back before forming a horizon, and the ones in which it continues to fall and forms a black hole (which will be the subject of the next section), we can study shell trajectories which tend to the $r=r_{\rm s}$ surface asymptotically, i.e. those that approach the $f\to0$ and $\alpha_-\to1^-$ limit monotonically, reaching it only after an infinite amount of regular time. In \cite{BLSV04} it was first shown that configurations of this sort can lead to Hawking-like radiation with arbitrarily long duration without necessarily forming any type of horizon. Then in \cite{BLSV06} the same authors analysed more detailed configurations with an analogue-gravity setting in mind. The present study reproduces those results and extends them further, with a more general approach to the construction of the geometries.\par To start with, we can use the same formalism as in the previous section, approximating a spherical distribution of matter with an infinitesimally thin shell. Following the same scheme as before, we can decide on a shell trajectory and then see how the ETF behaves, from which we can also infer how the RSET is modified in the dynamical $in$ vacuum with respect to the Boulware vacuum. Taking, for example, the trajectory $R=r_{\rm s}(1+e^{-v_-/r_{\rm s}})$ we obtain the ETF plotted in fig.~\ref{f11}, which rapidly tends to zero.\par When the ETF tends to zero, the RSET in the exterior Schwarzschild region approaches the divergent behaviour it has in the Boulware vacuum as the shell approaches $r_{\rm s}$, as can be seen clearly from eqs.~\eqref{27}. In the Boulware vacuum, when matter has crossed its Schwarzschild radius there is a divergence of the type $1/f\sim1/(r-r_{\rm s})$ (with $f$ being the redshift function) \cite{Boulware,Visser96}. In this case, as the ETF between the $in$ and Boulware vacua tends to zero, the values of the RSET for the $in$ vacuum in the exterior $r>R$ region approach those for the Boulware vacuum. The plot in fig.~\ref{f11} therefore represents the difference, decreasing in time, between the physical RSET and a quantity which at the shell surface grows as $1/f(R)\sim 1/(R-r_{\rm s})\sim e^{v/r_{\rm s}}$, i.e. exponentially in time.\par \begin{figure} \centering \includegraphics[scale=.7]{f110} \caption{ETF for the outgoing radiation sector in the case of a thin spherical shell which follows a collapse trajectory $R=1+e^{-v_-}$ (as always, in $r_{\rm s}=1$ units).
We observe that the function tends to zero, indicating that there is no finite outgoing radiation flux asymptotically, meaning the dynamical $in$ vacuum tends to the Boulware vacuum above the shell surface.} \label{f11} \end{figure} In fact, it turns out that for any such asymptotic approach of the shell to its Schwarzschild radius the result is qualitatively the same: an ETF which tends to zero and an RSET with rapidly increasing values. To understand why this occurs, we can look at the definition of the ETF in eq. \eqref{26}, and see that it only has a finite (constant) asymptotic value if the relation $du_{\rm in}/du_{\rm out}$ is asymptotically an exponential in $u_{\rm out}$. Additionally, if the shell has positive mass and is in continual in-fall, then light rays become more dispersed in time after travelling through it and escaping. This implies that the asymptotic relation between the coordinates must be of the type $du_{\rm in}/du_{\rm out}\sim e^{-ku_{\rm out}}$, with $k>0$. Integrating this relation, we see that $u_{\rm out}$ reaches infinity for a finite value of $u_{\rm in}$, i.e. the outgoing light rays need to be trapped inside a finite spatial region after some moment. The dynamics of an infinitesimally thin shell cannot trap light rays in such a way without also forming a horizon in finite time, so a non-zero asymptotic flux of field particles is only obtained when the collapsing shell forms a proper black hole. In that case the relation between the $in$ and $out$ coordinates becomes \begin{equation} \frac{du_{\rm in}}{du_{\rm out}}\to (const.)\,e^{-u_{\rm out}/2r_{\rm s}} \end{equation} at large $u_{\rm out}$, which gives the previously mentioned Hawking temperature result $\kappa_{u_{\rm in}}^{u_{\rm out}}=1/2r_{\rm s}$. If the Schwarzschild radius is only reached asymptotically, no light ray ever gets trapped.\par So from this result it may seem that if a distribution of matter approaches the formation of a horizon only asymptotically, then the exterior vacuum polarisation would always tend to its values in the Boulware vacuum. However, the result obtained for an infinitesimally thin shell cannot be generalised in such a way. Shells often provide a good and simple model for collapse scenarios, giving results quite similar to those of more realistic matter models, for instance ones in which the shell has a finite thickness. But when studying asymptotic results in the vicinity of a horizon, the effect of even the tiniest width for the shell can change the result entirely, as we will see below. This can be seen as an interesting illustration of the idea that horizons can act as a magnifying glass for high-energy physics (in this case represented by the detailed structure inside a thin shell). \subsection{Light-ray trapping without horizon formation} Essentially, the reason why the result is different when a finite-volume matter-filled region is introduced (instead of a thin shell) is that the resulting geometry \emph{can} trap light rays inside a finite spatial region without forming an apparent horizon in finite time, only tending to its formation asymptotically in time. More specifically, in such a case outgoing light-ray trajectories would remain confined inside the Schwarzschild radius for an arbitrarily long, or even infinite, period of time (as measured by the regular coordinate $v_{\rm in}$), tending to escape only asymptotically in time and thus never doing so.
Fig.~\ref{f11-1} shows conformal diagrams of this type of spacetime, both for the case in which light rays eventually escape and for the case in which they do not. In terms of the expansion of these null geodesics, this situation would be characterised by an expansion which tends to zero at the Schwarzschild radius $r=r_{\rm s}$ only asymptotically in time, as opposed to the standard black-hole formation scenario, in which it becomes zero after a finite time. This confined state of the light rays at least gives the ETF the possibility of having a finite asymptotic value. Whether this cuts off the growth of the RSET as the Boulware divergence is approached is a separate question.\par To clarify, if this light-trapping behaviour were to be maintained asymptotically, even if no apparent horizon were formed in finite time, the null surface described by the first trapped radial ray would in fact become an event horizon. However, by manipulating the geometry further one could stop this asymptotic tendency at any time and let the trapped light rays out of their spatial confinement. The interesting thing is that before one does so, a Hawking-like flux of radiation can be maintained for an arbitrarily long period of time, without the need to form any sort of horizon. On the other hand, if this (quasi)thermal flux were slightly different from the case of Hawking radiation (or absent altogether, as in the thin shell case), the RSET would again tend to a divergence at the Schwarzschild radius.\par \begin{figure} \centering \includegraphics[scale=.9]{f11-1} \caption{Conformal diagrams of two spacetimes in which light rays are trapped inside a finite spatial region. The curves indicate surfaces of constant radius. The dash-dotted curve is the Schwarzschild radius $r=r_{\rm s}$. The diagram on the left represents confinement which only lasts a finite time, without the formation of an event horizon. The diagram on the right represents confinement which lasts all the way to the asymptotic future null region, forming an event horizon. These two cases have an initial region of identical semiclassical (and classical) behaviour. In the second case an inner Cauchy horizon may also form, resulting in an extendibility of the geometry analogous to that of an extremal charged black hole.} \label{f11-1} \end{figure} To see when each of these possible outcomes actually takes place, we will present and categorise a large family of spherically-symmetric geometries which trap light rays while having only an asymptotic tendency to form a horizon. For the complete picture, consider a spacetime given by an exterior patch of a Schwarzschild geometry and an interior patch with an arbitrary (spherically-symmetric) distribution of matter, given by the line element in advanced Eddington-Finkelstein coordinates \begin{equation}\label{29} ds^2=-f(v,r)dv^2+2y(v,r)dvdr+r^2d\Omega^2, \end{equation} where $f$ and $y$ are arbitrary functions which depend on the characteristics of the matter content. The two regions are separated by an in-moving spherical surface located at a radius $R(v)$. For convenience we will define the coordinate $d\equiv r-r_{\rm s}$, which is just the radial distance from the Schwarzschild radius. Outgoing radial light rays in the interior part of the geometry will follow the trajectories given by the differential equation \begin{equation}\label{30} d'(v)=\frac{1}{2}\frac{f(v,d+r_{\rm s})}{y(v,d+r_{\rm s})}, \end{equation} where $'$ denotes the derivative with respect to $v$.
For the exterior geometry $y(v,r)=1$ and $f(v,r)=f(r)=1-r_{\rm s}/r$. We will define the generalised redshift function in both regions as \begin{equation} F(v,r)\equiv\begin{cases} 1-\frac{r_{\rm s}}{r},&r> R(v),\\ \frac{f(v,r)}{y(v,r)},&r\le R(v). \end{cases} \end{equation} In the absence of an apparent horizon, $F$ will be positive everywhere. At the interface $r=R$ we will assume that it is at least continuous, and that there its value is a minimum of the function in the radial direction, which tends to zero asymptotically in the temporal direction. With this setup, it turns out that whether or not light rays get trapped in the interior region depends only on $F$ at $R$ and its first non-zero spatial derivative on the interior side. Thus, it is completely independent of the exterior geometry, so we can afford to be a bit lax with the matching conditions and only require continuity of the metric for now.\par We can expand the generalised redshift function $F$ in a power series in the coordinate $d$ around the curve $d_R(v)\equiv R(v)-r_{\rm s}$ (approaching from the inside, and assuming analyticity there) and write eq. \eqref{30} as \begin{equation}\label{32} d'(v)=\frac{1}{2}\frac{d_R(v)}{r_{\rm s}+d_R(v)}+k_1\left[d_R(v)-d(v)\right]+k_2\left[d_R(v)-d(v)\right]^2+\cdots, \end{equation} where the first term ensures continuity with the metric in the exterior Schwarzschild region. The coefficients $k_i$ can, in principle, also be variable in time, but we will focus on cases in which the first non-vanishing one remains constant (or sufficiently close to constant) at large times. Since we will only focus on asymptotic solutions, its possible early-time variability and the values of the higher-order coefficients will not be relevant.\par We will define three categories for the possible functions $d_R(v)$, covering all monotonic asymptotic approaches to the $d=0$ surface (the Schwarzschild radius). Then we will see the most general conditions the coefficients $k_i$ must satisfy in each case for light rays to get trapped. \begin{figure} \centering \includegraphics[scale=.6]{f12} \caption{Qualitative plot of different possible redshift functions $F$ at radii around the surface $R(v)$ at some instant $v=const.$ during the collapse.} \label{f12} \end{figure} \paragraph{Sub-exponential approach:} The first type of surface trajectory we consider is an approach to the Schwarzschild radius with a distance which decreases as the inverse of a polynomial, \begin{equation} d_R(v)=r_{\rm s}\left(\frac{r_{\rm s}}{v}\right)^n,\qquad \text{with }n\text{ real and positive}. \end{equation} Let us call $m$ the degree of the first non-zero coefficient of the series expansion \eqref{32}, i.e. \begin{equation}\label{34} d'(v)=\frac{1}{2}\frac{1}{1+(v/r_{\rm s})^n}+k_m\left[r_{\rm s}\left(\frac{r_{\rm s}}{v}\right)^n-d(v)\right]^m+\cdots. \end{equation} We have said that the redshift function $F$ has a minimum at $d_R$, so $k_m>0$. The value of $m$ can be thought of as a measure of the width of this minimum (on the inside), as the larger it is, the smoother the function becomes around $d_R$. In the limit $m\to\infty$ it becomes constant in $d$ at equal times, making its approach to zero extend to all points of the interior region. Each of these radial points would then mark an asymptotic marginally trapped surface.\par In order to look for trapped solutions (remaining inside $d<0$ but close to $d_R$ at large times), we have to make some assumption about their asymptotic behaviour.
If we assume that the $(-d)^m$ term dominates on the rhs of \eqref{34}, we obtain that if \begin{equation}\label{35} m-1>\frac{1}{n-1} \end{equation} is satisfied, there are trapped asymptotic solutions of the type \begin{equation}\label{36} d\sim -\frac{1}{(k_m(m-1))^{\frac{1}{m-1}}}\frac{1}{(v-c)^{\frac{1}{m-1}}}, \end{equation} where $c$ is an integration constant. In fact, for these solutions $m$ can also be any real number greater than 1. On the other hand, if we assume that the terms with $1/v^n$ dominate, under the same condition \eqref{35} we obtain another trapped solution \begin{equation} d\sim -\frac{1}{2(n-1)}\frac{r_{\rm s}^n}{v^{n-1}}, \end{equation} which, when compared with the previous solutions through the inequality \eqref{35}, can be seen to be asymptotically closer to $d=0$, and therefore corresponds to the first trapped light ray. All radial light rays which cross the distribution of matter after the one with the above asymptotic solution become trapped inside. \begin{figure} \centering \includegraphics[scale=.5]{f13} \caption{Region of the space of parameters $m, n$ which satisfies the inequality \eqref{35}, allowing for light rays to be trapped inside the asymptotic horizon.} \label{f13} \end{figure} \paragraph{Exponential approach:} We now consider a surface trajectory of the type \begin{equation}\label{38} d_R(v)=r_{\rm s}e^{-\gamma v},\qquad\text{with }\gamma\text{ real and positive}. \end{equation} In this case we have the same differential equation \eqref{34}, only with exponentials instead of polynomials. If the degree $m$ of the first non-zero term in the expansion is strictly greater than 1, then the solution which is asymptotically below $d=0$ but gets closest to it, i.e. the first trapped ray, can be obtained by assuming that the rhs of the equation is dominated by terms of order $e^{-\gamma v}$. The result is \begin{equation} d\sim -\frac{r_{\rm s}}{2\gamma}e^{-\gamma v}. \end{equation} On the other hand, if $m=1$, then asymptotically we must consider the terms with $e^{-\gamma v}$ and $d(v)$ in the differential equation. Then it turns out that there are asymptotic trapped solutions only if \begin{equation} \gamma>k_1. \end{equation} Their expressions up to order $e^{-\gamma v}$ are \begin{equation}\label{41} d\sim ce^{-k_1v}-\frac{1}{2}\frac{1+2k_1}{\gamma-k_1}e^{-\gamma v}, \end{equation} where again $c$ is an integration constant. The first trapped light ray corresponds to $c=0$, while subsequent trapped ones correspond to values $c<0$. The limit $c\to 0^+$ corresponds to the last escaping ray. \paragraph{Super-exponential approach:} If $d_R(v)$ approaches zero faster than an exponential (e.g. a Gaussian), then light rays are trapped for any $m\ge1$. The first trapped solution is asymptotically proportional to the integral of $d_R(v)$ (the error function for a Gaussian). \subsection{Asymptotic temperature} Having seen a quite general family of geometries for which outgoing light rays do get trapped, we can now connect them with the outside and check how semiclassical effects behave around the Schwarzschild radius, where an apparent horizon is formed asymptotically. At first, we will allow a discontinuity in the first derivatives of the metric components at the surface $R$ and see what values the ETF takes.
After that we will consider a case in which the transition is smoothed out (with zero spatial derivatives for $F$ on both sides of its minimum, making it behave like the redshift function in an extremal charged black-hole formation process) and see how the result changes.\par Let us first trace the trajectories of the light rays in the exterior geometry, from the moment in which they cross the surface located at $R$ (at the escape time $v_{\rm et}$) until they reach future null infinity, and explain how the ETF is calculated. To label the specific light rays at future infinity, we will use the parameter $v_\infty$ given by the origin of the asymptotic straight line which the ray tends to follow, as can be seen by looking at fig.~\ref{f14}. Integrating the outgoing null geodesic equation in the Schwarzschild region, we can obtain this parameter as a function of the surface point, \begin{equation}\label{42} v_\infty=v_{\rm et}-2R(v_{\rm et})-2r_{\rm s}\log[R(v_{\rm et})-r_{\rm s}]. \end{equation} When a horizon is formed in finite regular time (and remains present forever), $v_\infty$ diverges as the argument of the logarithm tends to zero, while the rest of the terms $v_{\rm et}-2R(v_{\rm et})$ remain finite and negligible. Then, \[R(v_{\rm et})-r_{\rm s}\sim e^{-v_\infty/2r_{\rm s}},\] and the ETF $\kappa_{u_{\rm in}}^{u_{\rm out}}=1/2r_{\rm s}$ is simply the decay constant of this exponential when the other term (in this case $R(v_{\rm et})-r_{\rm s}$) tends to a simple zero. In this case we see that the internal structure of the collapsing matter distribution is not in any way reflected in the asymptotic ETF, which is why the Hawking temperature of black holes depends only on their Schwarzschild radius.\par \begin{figure} \centering \includegraphics[scale=.55]{f14} \caption{The trajectory of an outgoing light ray which is emitted at $r=0$, escapes the surface of the matter distribution $R$ and reaches infinity, tending to a straight-line trajectory.} \label{f14} \end{figure} On the other hand, when a horizon is formed asymptotically, both the logarithm and $v_{\rm et}$ diverge on the rhs of \eqref{42}. If they both diverge logarithmically, then the ETF is modified and can now reflect some of the characteristics of the interior geometric structure. If either of them diverges hyperbolically, then the ETF shuts down altogether, tending to zero roughly as $1/v_\infty$.\par This last behaviour is precisely what occurs in the case of the sub-exponential approach of the surface to the Schwarzschild radius, in which $v_{\rm et}$ in \eqref{42} diverges hyperbolically. When the ETF tends to zero, the RSET on the surface approaches its Boulware divergence, as was the case for the infinitesimal shell.\par For the case of the exponential approach, the result is the first of those mentioned above: both terms in \eqref{42} diverge logarithmically. For $k_1\neq0$ (and $\gamma>k_1$, as is necessary for light rays to be trapped), the contribution of each divergence (in the same order as in \eqref{42}) is \begin{equation}\label{43} v_\infty\sim-\frac{1}{\gamma-k_1}\log(\epsilon)-\frac{2\gamma r_{\rm s}}{\gamma-k_1}\log(\epsilon),\qquad\epsilon\to 0, \end{equation} where this last limit means an approach of the arguments of both logarithms to a simple zero.
The asymptotic ETF is then \begin{equation} \kappa_{u_{\rm in}}^{u_{\rm out}}=\frac{\gamma-k_1}{2\gamma r_{\rm s}+1}, \end{equation} which is always less than the Hawking temperature, approaching it only in the $\gamma\to\infty$ limit (an infinitely quick collapse). If $k_1=0$, the result is the same as above with $k_1\to 0$, which also makes the lower bound on the possible values of $\gamma$ zero.\par Lastly, for the case of the super-exponential approach to the Schwarzschild radius, the dominant divergence in \eqref{42} is a logarithmic one with the same coefficient as in the case of horizon formation in finite time, leading to an asymptotic Hawking temperature with the same value. As an example, let us consider a surface trajectory of the type \begin{equation} d_R=r_{\rm s}e^{-(\gamma v)^n},\qquad \text{with }n\ge 2. \end{equation} Then the diverging terms in $v_\infty$ corresponding to $v_{\rm et}$ and the logarithm are respectively \begin{equation} \frac{1}{\gamma}\left[\log(\epsilon)\right]^{1/n}-2r_{\rm s}\log(\epsilon),\qquad \epsilon\to 0, \end{equation} so the dominant divergence is simply the same logarithm as in the Hawking case.\par We can therefore say that if a collapse is sufficiently quick, then, as far as long-term semiclassical effects above the horizon are concerned, there is no difference from the case of the formation of a black hole in finite time: the Boulware divergence is cancelled out and there is a flux of Hawking radiation. On the other hand, if the collapse is slower, then the asymptotic ETF decreases according to the speed of collapse and to one particular characteristic of the internal structure: the first spatial derivative of $F$ at the surface. We can say that anything further in this region remains invisible to the ETF, just as the whole structure was invisible for a quicker collapse. For a sufficiently slow collapse, when the ETF becomes zero, the internal structure again becomes hidden asymptotically, only this time the semiclassical effects become indistinguishable not from the case of dynamical black-hole formation, but from that of a static black hole. \subsection{A smooth transition: the extremal black hole} Up to this point we have considered a generalised redshift function $F$ with a minimum which has a discontinuity in the first derivative, as is seen in fig.~\ref{f12}. The slope on the outside has always been finite, given by the derivative of the Schwarzschild redshift function, and has appeared implicitly in the calculations through the quantity $r_{\rm s}$. For the slope on the inside we have analysed the cases in which it may be finite or zero.\par We will now explore the case in which both the slope on the inside and on the outside of the minimum may be zero. In particular, we will modify the external static geometry being revealed beyond the surface $R(v)$ in such a way that the slope (and subsequent derivatives) on the outside can have an arbitrary value at horizon formation, maintaining however an asymptotically Schwarzschild structure, as shown in fig.~\ref{f15}. We are going to show that, in accordance with the results in \cite{BLSV06}, the asymptotic ETF will be zero if the slope on the outside is zero, even for an exponential approach of $R$ to the horizon.\par \begin{figure} \centering \includegraphics[scale=.7]{f15} \caption{Generalised redshift function $F(v,r)$ with a minimum at the interface between the interior and exterior geometry, located at the point $R(v)$.
For the interior geometry three sections of constant time are represented, $v_1<v_2<v_3$. The section at $v_3$ represents the behaviour at times close to infinity. The exterior geometry is static. When $R(v)$ approaches $r_{\rm s}$, the function is continuous but has an otherwise arbitrary behaviour at both sides of the minimum. Far away from $r_{\rm s}$ it transitions into the Schwarzschild redshift function.} \label{f15} \end{figure} To show this, let us trace the trajectories of light rays in this new exterior geometry, from the moment in which they cross $R$ (at the escape time $v_{\rm et}$). For the region close to $r_{\rm s}$, we can write the differential equation which governs their movement as a power series in their distance $d(v)$ from this point (in the same way we expanded it in the distance $d_R(v)$ in \eqref{32} for the interior geometry), \begin{equation} d'(v)=\frac{1}{2}F(v,r_{\rm s}+d(v))=k_1d(v)+k_2d(v)^2+\cdots. \end{equation} If the exterior geometry is static, then the coefficients $k_i$ are constant, although for our purposes we will only need the first non-zero one to be asymptotically constant, similarly to our previous calculations for the interior region. Let us call this first non-zero coefficient $k_m$, so that the light rays close to $d=0$ will move according to \begin{equation} d'(v)\simeq k_md(v)^m. \end{equation} For a Schwarzschild exterior $k_m=k_1=1/(2r_{\rm s})$. In general, if $m=1$ then the solutions of the above equation are \begin{equation} d(v)\simeq d_R(v_{\rm et})e^{k_1(v-v_{\rm et})} \end{equation} for rays crossing the surface at $d_R(v_{\rm et})=R(v_{\rm et})-r_{\rm s}$. On the other hand, for $m>1$ the solutions are \begin{equation} d(v)\simeq\frac{1}{\left[d_R(v_{\rm et})^{-(m-1)}-k_m(m-1)(v-v_{\rm et})\right]^{\frac{1}{m-1}}}. \end{equation}\par When the exterior geometry was Schwarzschild, we calculated light-ray dispersion through the variation in the quantity $v_\infty$, defined from the integration of null trajectories through the whole exterior region, up to infinity. We did so because this parameter offered the most obvious relation with the label $u_{out}$ which is used to define the $out$ vacuum state (they are in fact proportional to each other). However, to calculate the ETF we only need to study the divergent part of the dispersion, which occurs long before the light rays reach infinity. In fact, for the behaviour of the ETF at large times we only need to trace their trajectories up to an arbitrarily small distance $\varepsilon$ away from the surface where the horizon forms asymptotically, i.e. up to $r_{\rm s}+\varepsilon$.\par We will define a new parameter $v_\varepsilon$ as the moment light rays cross the $r_{\rm s}+\varepsilon$ surface (which is always outside the surface $R(v)$ at large enough times), as shown in fig.~\ref{f16}. This parameter will take the place of $v_\infty$ in the study of the divergence in the dispersion of the trajectories of light rays which get arbitrarily close to the first trapped one. For the above solutions with $m=1$ we have \begin{equation} v_\varepsilon\simeq v_{\rm et}+\frac{1}{k_1}\log(\frac{\varepsilon}{d_R(v_{\rm et})})\sim v_{\rm et}-\frac{1}{k_1}\log[d_R(v_{\rm et})], \end{equation} where the second relation shows that at large times, so long as $\varepsilon$ is finite, the value of $v_\varepsilon$ is in fact independent of this distance parameter. Approaching the last escaping light rays we have $d_R(v_{\rm et})\to 0$ and $v_{\rm et}\to \infty$.
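(As a quick consistency check of this parametrisation: for the Schwarzschild value $k_1=1/(2r_{\rm s})$ the $m=1$ expression gives $v_\varepsilon\sim v_{\rm et}-2r_{\rm s}\log[d_R(v_{\rm et})]$, reproducing exactly the divergent structure of the logarithmic term in eq. \eqref{42}.)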
From the results in the previous subsection, we know that if the divergence in $v_{\rm et}$ is logarithmic or quicker, then the ETF has a finite asymptotic value. On the other hand, if $v_{\rm et}$ diverges more slowly, then the ETF shuts down.\par \begin{figure} \centering \includegraphics[scale=.5]{f16} \caption{The path of a light ray which escapes from the surface $R$ at time $v_{\rm et}$ and crosses $r_{\rm s}+\varepsilon$ at time $v_\varepsilon$.} \label{f16} \end{figure} From the solutions with $m>1$ we obtain \begin{equation} v_\varepsilon\sim v_{\rm et}+\frac{1}{k_m(m-1)d_R(v_{\rm et})^{m-1}}. \end{equation} In this case, no matter how quickly $v_{\rm et}$ diverges, the parameter $v_\varepsilon$ always has a dominant hyperbolic divergence, making the ETF shut down asymptotically in time as $1/v_\varepsilon$.\par This result shows that if the slope of $F$ on the outside is asymptotically zero, then the ETF tends to zero even for an arbitrarily quick collapse. Consequently, the RSET in the $in$ vacuum tends to its values in the Boulware vacuum, approaching a divergence as $F$ tends to zero. This case should not be confused with that of the thin shell approaching the Schwarzschild radius asymptotically, as although the ETF and RSET have the same asymptotic behaviour, light rays behave quite differently (in one case they get trapped and in the other they do not).\par A closer analogue, in which semiclassical effects have the same long-term behaviour, is a spacetime in which an extremal charged black hole forms in finite time. In it, the generalised redshift function has a smooth minimum (with zero slope on both sides) which, unlike in our model, reaches zero in finite time (and stays at zero from then on). In this case there is again an absence of an asymptotic flux of particles at future null infinity \cite{Hiscock1977,Liberati2000}. \section{Horizon formation at different velocities}\label{s6} Up to this point we have analysed the consequences of staying in the region of large gradient in fig.~\ref{f5} outside the Schwarzschild radius, but approaching it asymptotically. We have also considered a variation of this problem involving a more general distribution of matter. The final step in our study is to see exactly what happens when the Schwarzschild radius \emph{is} crossed in finite null time. In particular, we have already shown for this case that the asymptotic ETF is always $1/2r_{\rm s}$, and it is a well-known fact that this provides an additional term in the RSET with respect to its Boulware vacuum value that precisely regularises the divergence at the horizon \cite{DFU}.\par In this section we will be interested in semiclassical effects produced in a finite time interval around the formation of a horizon by a shell collapsing \textit{at different velocities lower than the speed of light} at the moment of crossing the horizon. Past studies in this direction, although detailed, have usually involved only a shell collapsing at light-speed (e.g. \cite{SC2014}), justified by the fact that during astrophysical black-hole formation, the velocity of falling matter is expected to be high when crossing the horizon. By contrast, as mentioned earlier, the goal of this work is to thoroughly study the semiclassical effects produced in more general dynamical situations.\par Since in this case the asymptotic solutions for both the ETF and RSET are known, we will be more interested in short-term dynamical effects.
At horizon formation, large values of the RSET are to be expected if the $in$ vacuum approximates the static Boulware vacuum in some way (say, in the case of a very slow collapse). Therefore, it is at the horizon itself that we might expect the clearest estimate of how large semiclassical effects can become. We will thus be interested in obtaining the total values of the RSET components there. To give them a more physical interpretation, we will also calculate the corresponding values of the vacuum energy density and the radial pressure measured by free-falling observers. \subsection{Conformal factor at the horizon} In order to obtain the values of the RSET in the $in$ vacuum, we need to calculate the conformal factor $C(u_{in},v_{in})$ which allows us to write the part of the metric \eqref{21} restricted to the time-radius subspace, in the $u_{in},v_{in}$ null coordinates, \begin{equation} ds^2_{(2)}=-C(u_{in},v_{in})du_{in}dv_{in}. \end{equation} Just as a reminder, the $in$ vacuum state of the dimensionally reduced problem is defined by the plane waves in the asymptotic Minkowski region at past null infinity, which is entirely within the asymptotic region of the exterior Schwarzschild geometry if the shell never reaches the speed of light in the past. The ingoing modes, labelled by $v_{in}$, either fall directly into the singularity or are reflected at the origin $r_-=0$ and from there either escape before the formation of the horizon and reach future null infinity, or fall into the singularity. If they reach future null infinity, they can be labelled by the coordinate $u_{out}$, the value of which is a function of the previous label $v_{in}$. Any point in the geometry outside the event horizon (both on the exterior and interior of the shell) can be labelled by a pair $(u_{out}, v_{in})$. In the notation introduced in eqs. \eqref{1} and \eqref{3}, $v_{in}$ is simply $v_+$ and $u_{out}$ is $u_+$. The dispersion of the light rays between past and future null infinity is given by \begin{equation} \frac{du_{out}}{dv_{in}}=\left.\frac{du_+}{du_-}\frac{dv_-}{dv_+}\right|_{v_-=u_-}=\frac{g(u_-)}{h(u_-)}, \end{equation} where we have made use of the relation $du_-=dv_-$ for the reflection of light rays at the origin, and where $u_-$ is a function of $v_{in}$ through the inverse of the integral of $h(u_-)$. Studying the values of $g(u_-)$ and $h(v_-)$, defined in \eqref{6}, from $-\infty$ until the formation of the horizon for different trajectories of collapse, one can see that $h$ is of order one throughout. On the other hand, $g$ always has a divergence at the horizon since $u_+$ reaches an infinite value while $u_-$ is still finite. The contrast in this behaviour implies that the approximation \begin{equation} \frac{dv_-}{dv_{in}}\simeq 1, \end{equation} that is, the approximation of considering our $v_{in}$ coordinate as the Minkowski $v_-$, captures the relevant physical effects produced in the dynamics around the formation of the horizon. It is easy to check that introducing a function $h$ which is different from 1, but of the same order of magnitude, would not change the general aspects of the results.
This approximation, apart from simplifying the calculations which follow, also allows us to fix the trajectory of the shell only in an arbitrarily small region around the point of horizon-crossing.\par From this point on we will drop the subscripts from the two null coordinates we will use for the most part: $v\equiv v_+$ and $u\equiv u_-$ (we will not use $u_{out}$ since it is divergent at the horizon). Also, we will mostly use the radial coordinate in the exterior region, so $r$ will always refer to $r_+$.\par From equations \eqref{3} and \eqref{6} we see that the conformal factor of the dimensionally reduced geometry as a function of $u$ and $v$ is \begin{equation} C(u,v)=|f(r(u,v))|g(u). \end{equation} Since we are interested in calculating the RSET at the horizon, where large values might be expected for it, we must evaluate the above quantity and at least its first two derivatives there. A minor inconvenience in that process is the fact that the explicit form of $r(u,v)$ is not generally available, and numerical calculations cannot be relied upon either, since at the horizon $f$ is zero and $g$ diverges. To handle this difficulty, we will use an expansion for $r(u,v)$ around the line corresponding to the horizon, where $u=u_{\rm h}=const.$, \begin{equation}\label{8q} r(u,v)=q_0(v)+q_1(v)(u-u_{\rm h})+\frac{1}{2}q_2(v)(u-u_{\rm h})^2+\cdots, \end{equation} where $q_i$ is the $i$-th derivative of $r$ with respect to $u$ evaluated at $u_{\rm h}$, namely, $q_i=\partial^i r/\partial u^i|_{u=u_{\rm h}}$. In order to calculate the RSET components, we will need up to second derivatives of the conformal factor in $u$. To evaluate them we must use the expansion of $r(u,v)$ in $u$ up to third order, due to the $1/(u-u_{\rm h})$ divergence generally present in $g(u)$. This means that we need only $q_0,\dots,q_3$.\par Let us now see how to calculate these coefficients. The lowest order one $q_0(v)$ is just the value of $r$ at the horizon, namely, the constant $r_{\rm s}=2M$. The rest of them can be obtained through the relations \begin{equation}\label{6q} \frac{\partial r}{\partial u}=-\frac{1}{2}g(u)f(r),\qquad \frac{\partial r}{\partial v}=\frac{1}{2}f(r), \end{equation} as we show in the following. The first of these equations evaluated at $u_{\rm h}$ gives $q_1(v)$, but its rhs is just as difficult to evaluate as the conformal factor itself. However, we can make use of the second equation to write the cross-derivative \begin{equation}\label{7q} \frac{\partial}{\partial{v}}\frac{\partial{r}}{\partial u}=-\frac{1}{2}g(u)f'(r)\frac{\partial r}{\partial v}=-\frac{1}{2}g(u)f'(r)\frac{1}{2}f(r)=\frac{1}{2}f'(r)\frac{\partial r}{\partial u}. \end{equation} Taking into account that $f'(r)$ evaluated at the horizon is just $1/r_{\rm s}$, the evaluation of this equation at $u_{\rm h}$ gives us a first order differential equation for $q_1(v)$, namely $q_1'(v)=q_1(v)/(2r_{\rm s})$. Using this method recursively allows us to write analogous equations for all the coefficients $q_i(v)$ in \eqref{8q}. For the ones relevant to our calculation of the RSET we obtain \begin{equation}\label{17q} \begin{split} &q_1'(v)=\frac{1}{2r_{\rm s}}q_1(v),\\ &q_2'(v)=\frac{1}{2r_{\rm s}}q_2(v)-\frac{1}{r_{\rm s}^2}q_1^2(v),\\ &q_3'(v)=\frac{1}{2r_{\rm s}}q_3(v)-\frac{3}{r_{\rm s}^2}q_2(v)q_1(v)+\frac{3}{r_{\rm s}^3}q_1^3(v). 
\end{split} \end{equation} Initial conditions for these equations can be found by fixing the zero of the $v$ coordinate at the point of horizon formation, and considering the relation $r_+=r_-$ at the surface of the shell.\par For a shell which crosses the horizon with an approximately constant radial velocity as seen from the inside ($\alpha_-=dv_-/du_-\simeq const.$), from equations \eqref{2} and \eqref{5} we get the relation \begin{equation}\label{28q} r_-\simeq r_{\rm s}+\frac{\alpha_--1}{2}(u-u_{\rm h}) \end{equation} at the shell surface, which gives us the initial conditions $q_1(0)=(\alpha_--1)/2$, $q_2(0)=0$ and $q_3(0)=0$ (these last two are approximate if $\alpha_-$ is only approximately constant, but the important aspects of our final results do not change if they have different values). We solve the above equations to get \begin{equation}\label{29q} \begin{split} &q_1(v)=-\frac{1-\alpha_-}{2}e^{v/2r_{\rm s}},\\ &q_2(v)=\frac{(1-\alpha_-)^2}{2r_{\rm s}}e^{v/2r_{\rm s}}(1-e^{v/2r_{\rm s}}),\\ &q_3(v)=-\frac{3(1-\alpha_-)^3}{8r_{\rm s}^2}e^{v/2r_{\rm s}}(1-4e^{v/2r_{\rm s}}+3e^{v/r_{\rm s}}). \end{split} \end{equation} \subsection{RSET evaluated at the horizon for the ``in'' vacuum} We now have everything prepared to calculate the RSET components at the horizon. Substituting the solutions \eqref{29q} into the series expansion \eqref{8q}, we see how $f$ depends on $u$ and $v$ up to third order in $(u-u_{\rm h})$. As for resolving the dependence of $g$ on $r$ (which appears through $\alpha_+$), we must remember the definition of this function \eqref{6} which tells us that it is evaluated at the shell surface. Therefore, close to the horizon, we can simply use the expression for $r$ given in \eqref{28q}. With these functions we can obtain $C(u,v)$ up to second order in $(u-u_{\rm h})$ (remember $g$ has a leading term $1/(u-u_{\rm h})$), \begin{equation} \begin{split} C(u,v)&=(1-\alpha_-)e^{v/2r_{\rm s}}+\left[\left(-\alpha_-^2+\frac{3}{2}\alpha_--1\right)e^{v/2r_{\rm s}}+(1-\alpha_-)^2e^{v/r_{\rm s}}\right]\frac{u-u_{\rm h}}{r_{\rm s}}\\&+\frac{e^{v/2r_{\rm s}}}{8(1-\alpha_-)}\left[3-10\alpha_-+12\alpha_-^2-10\alpha_-^3+3\alpha_-^4\right.\\&\left.-4(1-\alpha_-)^2(3-5\alpha_-+3\alpha_-^2)e^{v/2r_{\rm s}}+9(1-\alpha_-)^4e^{v/r_{\rm s}}\right]\frac{(u-u_{\rm h})^2}{r_{\rm s}^2}+\cdots. \end{split} \end{equation} Finally, we can use \eqref{23} to obtain the components of the RSET at the horizon: \begin{subequations}\label{31q} \begin{align} \begin{split} \expval{T_{uu}}&=\frac{1}{24\pi r_{\rm s}^2}\left(\frac{-6\alpha_-^4+16\alpha_-^3-27\alpha_-^2-16\alpha_--6}{8(1-\alpha_-)^2}\right.\\&\hspace{35mm}\left.+\frac{\alpha_-}{2}e^{v/2r_{\rm s}}+\frac{3}{4}(1-\alpha_-)^2e^{v/r_{\rm s}}\right), \end{split}\label{31a}\\ \expval{T_{uv}}&=-\frac{1}{24\pi r_{\rm s}^2}\frac{1-\alpha_-}{2}e^{v/2r_{\rm s}},\\ \expval{T_{vv}}&=-\frac{1}{24\pi r_{\rm s}^2}\frac{1}{8}. \end{align} \end{subequations} Their behaviour can be read easily, except perhaps for the first constant term in the parentheses in $\expval{T_{uu}}$, which has been plotted as a function of $\alpha_-$ in fig.~\ref{f17}. The following observations can be made: \begin{itemize} \item Firstly, the components seem to grow exponentially on the horizon as time passes. This, however, turns out to be a consequence of the coordinate system in which they are expressed.
In a system more appropriate for the static Schwarzschild region, say the Eddington-Finkelstein advanced coordinates $(v,r)$, this behaviour is suppressed by factors of $1/C$ arising from the coordinate transformation through $\partial u/\partial r$. A more detailed analysis of the energy density and flux perceived by a free-falling observer will follow shortly. \item The second thing one might notice is that $\expval{T_{vv}}$ is constant, and therefore completely independent of the dynamics of the collapse. This is an obvious consequence of the fact that we have chosen the Eddington-Finkelstein $v$ coordinate, which is not affected by the interior Minkowski region. \item Finally, we note that the $\expval{T_{uu}}$ component diverges as $\alpha_-\to 1$, that is, as the collapse becomes slower, approaching the static limit. As we will see, the $1/(1-\alpha_-)^n$ terms are suppressed exponentially in the regular Eddington-Finkelstein coordinates when a long time has passed since the formation of the horizon, but they play an important role near the point of horizon formation. \end{itemize} \begin{figure} \centering \includegraphics[scale=0.7]{f17} \caption{Plot of the constant part in the parentheses of eq. \eqref{31a} as a function of $\alpha_-$ in its domain of possible values. It has negative values throughout and a divergence at $\alpha_-=1$.} \label{f17} \end{figure} \subsection{Energy density, flux and pressure observed by a free-falling observer at the horizon} Let us consider the four-velocity $w$ of a free-falling observer in the Schwarzschild geometry expressed in $(u,v)$ coordinates, evaluated at the moment of horizon crossing. It has the form \begin{equation} w^\rho=\left(\sqrt{\frac{2}{\beta_0}}\frac{1}{C},\sqrt{\frac{\beta_0}{2}}\right), \end{equation} where $\beta_0$ is related to the radius $r_0$ from which the free fall was initiated through \begin{equation} \beta_0=\frac{1}{2}\frac{1}{1-r_{\rm s}/r_0}. \end{equation} Let us also introduce the space-like unit vector perpendicular to this four-velocity and pointing in the outward radial direction, \begin{equation} z^\rho=\left(-\sqrt{\frac{2}{\beta_0}}\frac{1}{C},\sqrt{\frac{\beta_0}{2}}\right). \end{equation} We now define the effective energy density $\rho$, flux $\Phi$ and pressure $p$ perceived by this observer as \begin{subequations}\label{36q} \begin{align} &\rho\equiv\expval{T_{\mu\nu}}w^\mu w^\nu=\frac{2}{\beta_0 C^2}\expval{T_{uu}}+\frac{1}{C}\expval{T_{uv}}+\frac{\beta_0}{2}\expval{T_{vv}},\\ &\Phi\equiv-\expval{T_{\mu\nu}}w^\mu z^\nu=\frac{2}{\beta_0 C^2}\expval{T_{uu}}-\frac{\beta_0}{2}\expval{T_{vv}},\\ &p\equiv\expval{T_{\mu\nu}}z^\mu z^\nu=\frac{2}{\beta_0 C^2}\expval{T_{uu}}-\frac{1}{C}\expval{T_{uv}}+\frac{\beta_0}{2}\expval{T_{vv}}. \end{align} \end{subequations} As an aside, we note that the conformal factor at the horizon, \begin{equation}\label{37q} C(u_{\rm h},v)=(1-\alpha_-)e^{v/2r_{\rm s}}, \end{equation} is not equal to 1 when $v=0$, where the geometry must match with the interior Minkowski region, because we are not using the Minkowski $v_-$ coordinate. If we were, we would have to multiply $C$ by $h=dv_+/dv_-$, which at the point of horizon formation has the value $h=1/(1-\alpha_-)$.\par We thus see that the growing exponentials appearing in eqs. \eqref{31q} do not show up in the scalar quantities in \eqref{36q}.
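This cancellation can be cross-checked symbolically. The following sketch (ours; it assumes SymPy is available and writes $t=e^{v/2r_{\rm s}}$, so that late times correspond to $t\to\infty$) substitutes the components \eqref{31q} and the conformal factor \eqref{37q} into the contractions \eqref{36q}:
\begin{verbatim}
import sympy as sp

# t = exp(v/(2 r_s)); a = alpha_-; b0 = beta_0
t, rs, a, b0 = sp.symbols('t r_s a b0', positive=True)
pre = 1 / (24 * sp.pi * rs**2)
K = (-6*a**4 + 16*a**3 - 27*a**2 - 16*a - 6) / (8*(1 - a)**2)
Tuu = pre * (K + a*t/2 + sp.Rational(3, 4)*(1 - a)**2 * t**2)
Tuv = -pre * (1 - a) * t / 2
Tvv = -pre / 8
C = (1 - a) * t                       # conformal factor, eq. (37q)

rho = 2/(b0*C**2)*Tuu + Tuv/C + (b0/2)*Tvv
Phi = 2/(b0*C**2)*Tuu - (b0/2)*Tvv
p   = 2/(b0*C**2)*Tuu - Tuv/C + (b0/2)*Tvv
for q in (rho, Phi, p):
    print(sp.simplify(sp.limit(q, t, sp.oo)))
\end{verbatim}
Each printed limit is a finite constant proportional to $1/(24\pi r_{\rm s}^2)$, in agreement with the expressions below.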
In fact, these turn out to have constant, finite asymptotic values that depend on the initial condition $\beta_0$ of the free-falling observer \begin{subequations} \begin{align} &\rho\xrightarrow[v\to\infty]{} \frac{1}{24\pi r_{\rm s}^2}\left(-\frac{1}{2}-\frac{\beta_0}{16}+\frac{3}{2\beta_0}\right),\\ &\Phi\xrightarrow[v\to\infty]{} \frac{1}{24\pi r_{\rm s}^2}\left(\frac{\beta_0}{16}+\frac{3}{2\beta_0}\right),\\ &p\xrightarrow[v\to\infty]{} \frac{1}{24\pi r_{\rm s}^2}\left(\frac{1}{2}-\frac{\beta_0}{16}+\frac{3}{2\beta_0}\right). \end{align} \end{subequations} When these values are approximately reached, the system can be said to have thermalised, as all other terms are suppressed exponentially. A measure of the time it takes to do so, in the $v$ coordinate, for a slow collapse (when $\alpha_-$ is close to 1) is given by the value \begin{equation} v_{\rm therm}= 4r_{\rm s}\log(\frac{1}{1-\alpha_-}). \end{equation} In fig.~\ref{f18} we see plots for $\rho$, $\Phi$ and $p$ for two different values of $\beta_0$, which make the asymptotic values of $\rho$ and $p$ have different signs. Except for the case of extremely large values of $\beta_0$, the asymptotic values of the previous quantities are always negligibly small due to the suppression of the RSET by Planck's constant (which has been omitted in the choice of units). However, this smallness can be compensated during the transient phase of the collapse. Near the point of horizon formation we have $1/(1-\alpha_-)^4$ terms, originating from the $1/(1-\alpha_-)^2$ term in $\expval{T_{uu}}$ in \eqref{31q} and from the $1/C^2$ term evaluated from \eqref{37q}. These terms can be made arbitrarily large if $\alpha_-$ is very close to 1, compensating the suppression by Planck's constant.\par \begin{figure} \centering \includegraphics[scale=0.55]{f181} \includegraphics[scale=0.55]{f182} \includegraphics[scale=0.55]{f183} \caption{Perceived energy density, flux and pressure at the horizon, as a function of the Eddington-Finkelstein $v$ coordinate, for $\beta_0=0.5$ (free fall from $r_0\to\infty$) and for $\beta_0=30$ (free fall from $r_0\simeq1.017r_{\rm s}$), in the Schwarzschild region of a collapse with parameter $\alpha_-=0.9$ ($v_{\rm therm}\simeq9.2r_{\rm s}$). The point $v=0$ marks the formation of the horizon. Immediately after, we observe that these functions have very large (negative) values. This is a direct consequence of the proximity of the parameter $\alpha_-$ to 1, as discussed in the text. On the other hand, the asymptotic values are always small (except for observers which start their free fall very close to the horizon, where $\beta_0\to\infty$). The signs of the asymptotic energy density and pressure depend on the velocity of the observer (they are negative for slower observers), while the outgoing flux is always positive, in accordance with the evaporating black-hole scenario.} \label{f18} \end{figure} With these results we see that the RSET approaches a physical divergence in the static limit $\alpha_-\to1$. In the limit itself this divergence is hardly surprising, as for a static shell the $in$ vacuum essentially becomes the Boulware one. What is interesting is the fact that this limit can be approached through the single velocity parameter $\alpha_-$ of the shell when it crosses the horizon, without imposing any conditions on its past evolution.
This seems to indicate that during the formation of a black hole, if by some mechanism the collapse of matter were to be slowed down just before it forms a horizon, its subsequent evolution would become a problem which requires a full semiclassical treatment. \section{Summary and conclusions} In the standard picture of a black-hole formation process, the pressure of a star fails to support its structure and matter begins to accelerate in an inward direction, acquiring very high speeds by the time a horizon is formed. In this scenario, the semiclassical theory presents no significant deviations from its classical counterpart \cite{DFU,Parentani1994,Barcelo2008,Unruh2018}. This is true throughout the collapse, except perhaps when the curvature approaches Planckian values, in the final stages before a singularity is formed, although it is not clear whether the semiclassical theory is applicable there at all. However, whether semiclassical effects become important in scenarios involving matter approaching the formation of a horizon in a different manner is less well understood, and worth studying in detail in order to determine the possible self-consistency of models of gravitational collapse beyond general relativity.\par Our study is inspired by the possible behaviour of matter in such situations, covering nonetheless a very large family of geometries due to the many unknowns in their evolution. In section \ref{s4} we began by analysing the case of an oscillating distribution of matter in the thin-shell approximation, which periodically approaches the formation of a horizon but bounces back just before it is formed. Through the ETF we saw periods of emission of Hawking-like radiation in between the bounces. Although the ETF in these periods was always around the Hawking value $1/2r_{\rm s}$, its particular shape was strongly influenced by how the in-crossing and out-crossing dispersion effects resonated with each other for individual modes. The bounces themselves caused a significant dispersion in the out-crossing modes, which translated into sharp increases of both the ETF and RSET. In general, we saw that semiclassical quantities (which depend on the derivatives of the terms measuring light-ray dispersion) become largest near the Schwarzschild radius at low speeds.\par To further explore this low-velocity regime, in section \ref{s5} we analysed surface trajectories which approach the Schwarzschild radius monotonically, but reach it only asymptotically. In this case we saw that semiclassical effects are very sensitive to the structure of the geometry close to the surface, so we needed to go beyond the thin-shell approximation and use an arbitrary spherically-symmetric geometry for the interior. With minimal assumptions, we showed that the values of the ETF and RSET at large times depend only on a few characteristics of the geometry through one of its degrees of freedom, which we called the generalised redshift function, namely: the speed at which its minimum approaches zero (i.e. the speed at which the formation of an apparent horizon is approached) and its spatial derivatives on both sides of this minimum. Depending on these quantities, the dynamical $in$ vacuum can behave as in the usual case of black-hole formation in finite time, or it can become similar to the static Boulware vacuum (generally at lower speeds of approach).
In the latter case the RSET acquires very large values around the Schwarzschild radius, tending to a divergence asymptotically.\par In section \ref{s6} we went back to the thin-shell approximation, and analysed the case of a trajectory which forms a horizon in finite time. The parameter we were interested in was the speed at which the shell crossed the Schwarzschild radius. We calculated the values of the RSET at the horizon, and the corresponding energy density, flux and pressure perceived by free-falling observers, as functions of this speed parameter. We saw that at low speeds these physical quantities can become arbitrarily large (and also stay large for longer at lower speeds), approaching a divergence in the static limit.\par To conclude, we remark that a clear-cut result from all the above situations is that semiclassical back-reaction on the geometry (through the RSET) is a necessary ingredient in analysing any geometry in which matter happens to be moving at very low velocities (much lower than the speed of light) when close to horizon formation. As a purely kinematic exercise, our analysis shows the richness of the situations around the threshold of horizon formation. Beyond that, although no complete dynamical scenario has yet been developed in which matter actually enters such low-velocity regimes, it is important to note that such a possibility is not excluded either. This opens up the exploration of alternative scenarios with which to compare the standard black-hole paradigm. \section*{Acknowledgments} Financial support was provided by the Spanish Government through the projects FIS2017-86497-C2-1-P, FIS2017-86497-C2-2-P (with FEDER contribution), FIS2016-78859-P (AEI/FEDER,UE), and by the Junta de Andalucía through the project FQM219. VB is funded by the Spanish Government fellowship FPU17/04471. \section*{References} \nocite{*}
{ "timestamp": "2019-11-07T02:12:34", "yymm": "1904", "arxiv_id": "1904.06558", "language": "en", "url": "https://arxiv.org/abs/1904.06558" }
\section{Introduction}\label{sec:introduction} \IEEEPARstart{D}{eep} neural networks have recently achieved impressive success in a number of machine learning and pattern recognition tasks and been under intensive research \cite{he2016identity, DBLP:conf/aaai/SzegedyIVA17, 7112511, 7346495, 7258387,wei2016hcp,jin2015deep}. Hierarchical neural networks have been known for decades, and there are a number of essential factors contributing to their recent rise, such as the availability of big data and powerful computational resources. However, arguably the most important contributor to the success of deep neural networks is the discovery of efficient training approaches \cite{hinton2006fast,bengio2009learning,6472238,vincent2008extracting,vincent2010stacked}. A particularly interesting advance in the training techniques is the invention of Dropout \cite{hinton2012improving}. At the operational level, Dropout adjusts the network evaluation step (feed-forward) at the training stage, where a portion of units are randomly discarded. The effect of this simple trick is impressive. Dropout enhances the generalization performance of neural networks considerably, and is behind many record-holders of widely recognized benchmarks \cite{krizhevsky2012imagenet,DBLP:conf/aaai/SzegedyIVA17,DBLP:conf/bmvc/ZagoruykoK16}. The success has attracted much research attention, and Dropout has found applications in a wider range of problems \cite{wager2013dropout,chen2014dropout,van2013learning}. Theoretical research from the viewpoint of statistical learning has pointed out the connections between Dropout and model regularization, which is the de facto recipe of reducing over-fitting for complex models in practical machine learning. For example, Wager et al. \cite{wager2013dropout} showed that for a generalized linear model (GLM), Dropout implicitly imposes an adaptive $L_{2}$ regularizer of the network weights through an estimation of the inverse diagonal Fisher information matrix. Sparsity is of vital importance in deep learning. It is straightforward to see that by removing unimportant weights, deep neural networks perform prediction faster. Additionally, sparsity is expected to yield better generalization performance and to reduce the number of examples needed in the training stage \cite{lecun1989optimal}. Recently much evidence has shown that the accuracy of a trained deep neural network will not be severely affected by removing a majority of connections, and many researchers focus on the deep model compression task \cite{DBLP:conf/icml/ChenWTWC15, han2015learning,han2015deep, denil2013predicting,ba2014deep,hinton2015distilling}. One effective way of compression is to train a neural network, prune the connections and fine-tune the weights iteratively \cite{han2015learning,han2015deep}. However, if we can cut the connections naturally via imposing sparsity-inducing penalties in the training process of a deep neural network, the work-flow will be greatly simplified. In this paper, we propose Shakeout, a new regularized deep neural network training approach which is easy to implement: randomly choosing to enhance or reverse each unit's contribution to the next layer in the training stage. Note that Dropout can be considered as a special \textquotedblleft flat" case of our approach: randomly keeping (enhance factor is $1$) or discarding (reverse factor is $0$) each unit's contribution to the next layer. Shakeout enriches the regularization effect.
In theory, we prove that it adaptively combines $L_{0}$, $L_{1}$ and $L_{2}$ regularization terms. $L_{0}$ and $L_{1}$ regularization terms are known as sparsity-inducing penalties. The combination of a sparsity-inducing penalty and an $L_{2}$ penalty of the model parameters has been shown to be effective in statistical learning: the Elastic Net \cite{zou2005regularization} has the desirable properties of producing sparse models while maintaining the grouping effect of the weights of the model. Because of the randomly \textquotedblleft shaking" process and the regularization characteristic pushing network weights to zero, our new approach is named \textquotedblleft Shakeout". As discussed above, it is expected to obtain much sparser weights using Shakeout than using Dropout because of the combination of $L_{0}$ and $L_{1}$ regularization terms induced in the training stage. We apply Shakeout on a one-hidden-layer autoencoder and obtain much sparser weights than those resulting from Dropout. To show the regularization effect on classification tasks, we conduct experiments on image datasets including MNIST, CIFAR-10 and ImageNet with representative deep neural network architectures. In our experiments we find that the deep neural networks trained using Shakeout always outperform those trained using Dropout, especially when the data is scarce. Besides the fact that Shakeout leads to much sparser weights, we also empirically find that it groups the input units of a layer. Due to the induced $L_{0}$ and $L_{1}$ regularization terms, Shakeout can result in weights reflecting the importance of the connections between units, which is meaningful for conducting compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of the deep architecture. This journal paper extends our previous work \cite{kang2016shakeout} theoretically and experimentally. The main extensions are listed as follows: 1) we derive the analytical formula for the regularizer induced by Shakeout in the context of GLM and prove several important properties; 2) we conduct experiments using Wide Residual Network \cite{DBLP:conf/bmvc/ZagoruykoK16} on CIFAR-10 to show Shakeout outperforms Dropout and standard back-propagation in promoting the generalization performance of a much deeper architecture; 3) we conduct experiments using AlexNet \cite{krizhevsky2012imagenet} on the ImageNet dataset with Shakeout and Dropout. Shakeout obtains comparable classification performance to Dropout, but with a superior regularization effect; 4) we illustrate that Shakeout can effectively reduce the instability of the training process of the deep architecture. Moreover, we provide a much clearer and more detailed description of Shakeout, derive the forward-backward update rule for deep convolutional neural networks with Shakeout, and give several recommendations to help practitioners make full use of Shakeout. In the rest of the paper, we give a review of the related work in Section 2. Section 3 presents Shakeout in detail, along with a theoretical analysis of the regularization effect induced by Shakeout. In Section 4, we first demonstrate the regularization effect of Shakeout on the autoencoder model. The classification experiments on MNIST, CIFAR-10 and ImageNet illustrate that Shakeout outperforms Dropout considering the generalization performance, the regularization effect on the weights, and the stabilization effect on the training process of the deep architecture.
Finally, we give some recommendations for practitioners to make full use of Shakeout. \section{Related Work} Deep neural networks have shown their success in a wide variety of applications. One of the key factors contributing to this success is the creation of powerful training techniques. The representational power of the network becomes stronger as the architecture gets deeper \cite{bengio2009learning}. However, millions of parameters make deep neural networks easily over-fit. Regularization \cite{erhan2010does,wager2013dropout} is an effective way to obtain a model that generalizes well. There exist many approaches to regularize the training of deep neural networks, like weight decay \cite{moody1995simple}, early stopping \cite{prechelt1998automatic}, etc. Shakeout belongs to the family of regularized training techniques. Among these regularization techniques, our work is closely related to Dropout \cite{hinton2012improving}. Many subsequent works were devised to improve the performance of Dropout \cite{wan2013regularization,ba2013adaptive,li2016improved}. The underlying reason why Dropout improves performance has also attracted the interest of many researchers. Evidence has shown that Dropout may work because of its good approximation to model averaging and regularization on the network weights \cite{srivastava2014dropout, warde2013empirical,baldi2013understanding}. Srivastava \cite{srivastava2014dropout} and Warde-Farley \cite{warde2013empirical} showed through experiments that the weight scaling approximation is an accurate alternative to the geometric mean over all possible sub-networks. Gal et al. \cite{DBLP:conf/icml/GalG16} claimed that training a deep neural network with Dropout is equivalent to performing variational inference in a deep Gaussian Process. Dropout can also be regarded as a way of adding noise into the neural network. By marginalizing the noise, Srivastava \cite{srivastava2014dropout} proved for linear regression that the deterministic version of Dropout is equivalent to adding an adaptive $L_{2}$ regularization on the weights. Furthermore, Wager \cite{wager2013dropout} extended the conclusion to generalized linear models (GLMs) using a quadratic approximation to the induced regularizer. The inductive bias of Dropout was studied by Helmbold et al. \cite{helmbold2015inductive} to further illustrate the properties of the regularizer induced by Dropout. In terms of implicitly inducing a regularizer on the network weights, Shakeout can be viewed as a generalization of Dropout. It enriches the regularization effect of Dropout, i.e. besides the $L_{2}$ regularization term, it also induces the $L_{0}$ and $L_{1}$ regularization terms, which may lead to sparse weights of the model. Due to the implicitly induced $L_{0}$ and $L_{1}$ regularization terms, Shakeout is also related to sparsity-inducing approaches. Olshausen et al. \cite{olshausen1997sparse} introduced the concept of sparsity in computational neuroscience and proposed the sparse coding method for the visual system. In machine learning, the sparsity constraint enables a model to capture the implicit statistical data structure, performs feature selection and regularization, compresses the data at a low loss of accuracy, and helps us to better understand our models and explain the obtained results.
Sparsity is one of the key factors underlying many successful deep neural network architectures \cite{lecun1998gradient,szegedy2015going,szegedy2016rethinking,DBLP:conf/aaai/SzegedyIVA17} and training algorithms \cite{boureau2008sparse,goodfellow2012spike}. A convolutional neural network is much sparser than a fully-connected one, which results from the concept of local receptive fields \cite{lecun1998gradient}. Sparsity has been a design principle and motivation for Inception-series models \cite{szegedy2015going,szegedy2016rethinking,DBLP:conf/aaai/SzegedyIVA17}. Besides working as a heuristic principle for designing a deep architecture, sparsity often works as a penalty induced to regularize the training process of a deep neural network. There exist two kinds of sparsity penalties in deep neural networks, which lead to activity sparsity \cite{boureau2008sparse,goodfellow2012spike} and connectivity sparsity \cite{thom2013sparse} respectively. The difference between Shakeout and these sparsity-inducing approaches is that for Shakeout, the sparsity is induced through simple stochastic operations rather than manually designed architectures or explicit norm-based penalties. This implicit way enables Shakeout to be easily optimized by stochastic gradient descent (SGD), the representative approach for the optimization of a deep neural network. \section{Shakeout} Shakeout applies to the weights in a linear module. The linear module, i.e. weighted sum, \begin{align} \theta & =\sum_{j=1}^{p}w_{j}x_{j}\label{eq:w-sum} \end{align} is arguably the most widely adopted component in data models. For example, the variables $x_{1}$, $x_{2}$, $\dots$, $x_{p}$ can be input attributes of a model, e.g. the extracted features for a GLM, or the intermediate outputs of earlier processing steps, e.g. the activations of the hidden units in a multilayer artificial neural network. Shakeout \textit{randomly} modifies the computation in Eq. (\ref{eq:w-sum}). Specifically, Shakeout can be realized by randomly modifying the weights: \textbf{\textit{Step 1}}: Draw $r_{j}$, where $\begin{cases} P(r_{j}=0) & =\tau\\ P(r_{j}=\frac{1}{1-\tau}) & =1-\tau \end{cases}$ . \textbf{\textit{Step 2}}: Adjust the weight according to $r_{j}$, \begin{multline*} \begin{cases} \tilde{w}_{j}\leftarrow-c s_j, & \ \textrm{if }r_{j}=0\qquad\,\,\,\textrm{\textrm{(A)}}\\ \tilde{w}_{j}\leftarrow(w_{j}+c\tau s_j)/(1-\tau) & \ \textrm{otherwise}\qquad\textrm{(B)} \end{cases} \end{multline*} where $s_{j}=\textrm{sgn}(w_{j})$ takes $\pm1$ depending on the sign of $w_{j}$ or takes 0 if $w_{j}=0$. As shown above, Shakeout chooses (randomly by drawing $r$) between two fundamentally different ways to modify the weights. Modification (A) sets the weights to a constant magnitude, regardless of their original values, with signs opposite to the original ones. Modification (B) updates the weights by a factor $(1-\tau)^{-1}$ and a bias depending on the signs. Note both (A) and (B) preserve zero values of the weights, i.e. if $w_{j}=0$ then $\tilde{w}_{j}=0$ with probability 1. Let $\tilde{\theta}=\tilde{\boldsymbol{w}}^{T}\boldsymbol{x}$; Shakeout leaves $\theta$ unbiased, i.e. $\mathbb{E}[\tilde{\theta}]=\theta$. The hyper-parameters $\tau\in(0,1)$ and $c\in(0,+\infty)$ configure the properties of Shakeout. Shakeout is naturally connected to the widely adopted operation of Dropout \cite{hinton2012improving,srivastava2014dropout}.
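As a concrete illustration, here is a minimal NumPy sketch of Steps 1--2 (ours; the function name and the Monte-Carlo check are for illustration only, not taken from any released implementation):
\begin{verbatim}
import numpy as np

def shakeout_weights(w, tau, c, rng):
    # Step 1: draw the random switches r_j (0 w.p. tau,
    # 1/(1-tau) w.p. 1-tau)
    keep = rng.rand(*w.shape) >= tau
    s = np.sign(w)
    # Step 2: modification (B) where kept, modification (A) otherwise
    return np.where(keep, (w + c * tau * s) / (1.0 - tau), -c * s)

rng = np.random.RandomState(0)
w, x = rng.randn(5), rng.randn(5)
est = np.mean([shakeout_weights(w, 0.5, 1.0, rng) @ x
               for _ in range(200000)])
print(est, w @ x)  # the two numbers should be close
\end{verbatim}
Setting $c=0$ reduces the two branches to Dropout's discard-or-rescale behaviour, while the Monte-Carlo mean illustrates the unbiasedness property $\mathbb{E}[\tilde{\theta}]=\theta$ stated above.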
We will show that Shakeout has a regularization effect on model training similar to, but richer than, the one induced by Dropout. From an operational point of view, Fig. \ref{fig:The-Shakeout-operations} compares Shakeout and Dropout. Note that Shakeout includes Dropout as a special case when the hyper-parameter $c$ in Shakeout is set to zero. \begin{figure}[!t] \centering \includegraphics[bb=240bp 70bp 550bp 600bp,scale=0.3]{sch} \protect\caption{Comparison between Shakeout and Dropout operations. This figure shows how Shakeout and Dropout are applied to the weights in a linear module. In the original linear module, the output is the summation of the inputs $\boldsymbol{x}$ weighted by $\boldsymbol{w}$, while for Dropout and Shakeout, the weights $\boldsymbol{w}$ are first randomly modified. In detail, a random switch $\hat{r}$ controls how each $w$ is modified. The manipulation of $w$ is illustrated within the amplifier icons (the red curves, best seen with colors). The coefficients are $\alpha=1/(1-\tau)$ and $\beta(w)=cs(w)$, where $s(w)$ extracts the sign of $w$ and $c>0$, $\tau\in[0,1]$. Note the sign of $\beta(w)$ is always the same as that of $w$. The magnitudes of coefficients $\alpha$ and $\beta(w)$ are determined by the Shakeout hyper-parameters $\tau$ and $c$. Dropout can be viewed as a special case of Shakeout when $c=0$ because $\beta(w)$ is zero in this circumstance.} \label{fig:The-Shakeout-operations} \end{figure} When applied at the training stage, Shakeout alters the objective $-$ the quantity to be minimized $-$ by adjusting the weights. In particular, we will show that Shakeout (with expectation over the random switch) induces a regularization term effectively penalizing the magnitudes of the weights and leading to sparse weights. Shakeout is an approach designed for helping model training; once the models are trained and deployed, one should remove the disturbance to allow the model to work at its full capacity, i.e. we adopt the resulting network without any modification of the weights at the test stage. \subsection{\label{sub:Shakeout-for-GLM}Regularization Effect of Shakeout} Shakeout randomly modifies the weights in a linear module, and thus can be regarded as injecting noise into each variable $x_{j}$, i.e. $x_{j}$ is randomly scaled by $\gamma_{j}$: $\tilde{x}_{j}=\gamma_{j}x_{j}$. Note that $\gamma_{j}=r_{j}+\frac{c(r_{j}-1)}{|w_j|}$, so the modification of $x_{j}$ is actually determined by the random switch $r_{j}$. Shakeout randomly chooses to enhance (i.e. when $r_{j}=\frac{1}{1-\tau}$, $\gamma_{j}>\frac{1}{1-\tau}$) or reverse (i.e. when $r_{j}=0$, $\gamma_{j}<0$) each original variable $x_{j}$'s contribution to the output at the training stage (see Fig. \ref{fig:The-Shakeout-operations}). However, the expectation of $\tilde{x}_{j}$ over the noise remains unbiased: since $\mathbb{E}[r_{j}]=\tau\cdot0+(1-\tau)\cdot\frac{1}{1-\tau}=1$, we have $\mathbb{E}[\gamma_{j}]=1$ and hence $\mathbb{E}_{r_{j}}[\tilde{x}_{j}]=x_{j}$. It is well-known that injecting artificial noise into the input features will regularize the training objective \cite{wager2013dropout,rifai2011adding,bishop1995training}, i.e. $\mathbb{E}_{\boldsymbol{r}}[\ell(\boldsymbol{w},\tilde{\boldsymbol{x}},y)]=\ell(\boldsymbol{w},\boldsymbol{x},y)+\pi(\boldsymbol{w})$, where $\tilde{\boldsymbol{x}}$ is the feature vector randomly modified by the noise induced by $\boldsymbol{r}$. The regularization term $\pi(\boldsymbol{w})$ is determined by the characteristics of the noise.
For example, Wager et al.\ \cite{wager2013dropout} showed that Dropout, corresponding to injecting blackout noise into the features, helps introduce an adaptive $L_{2}$ penalty on $\boldsymbol{w}$. In this section we illustrate how Shakeout helps regularize the model parameters $\boldsymbol{w}$ using GLMs as an example. Formally, a GLM is a probabilistic model of predicting the target $y$ given features $\boldsymbol{x}=[x_{1},\dots,x_{p}]$, in terms of the weighted sum in Eq. (\ref{eq:w-sum}): \begin{eqnarray} P(y|\boldsymbol{x},\boldsymbol{w}) & = & h(y)g(\theta)e^{\theta y}\label{eq:GLM-eq}\\ \theta & = & \boldsymbol{w}^{T}\boldsymbol{x}\nonumber \end{eqnarray} With different $h(\cdot)$ and $g(\cdot)$ functions, the GLM can be specialized into various useful models or modules, such as the logistic regression model or a layer in a feed-forward neural network. Roughly speaking, however, the essence of a GLM is similar to that of a standard linear model, which aims to find weights $w_{1},\dots,w_{p}$ so that $\theta=\boldsymbol{w}^{T}\boldsymbol{x}$ aligns with $y$ (the functions $h(\cdot)$ and $g(\cdot)$ are independent of $\boldsymbol{w}$ and $y$, respectively). The loss function of a GLM with respect to $\boldsymbol{w}$ is defined as \begin{align} l(\boldsymbol{w},\boldsymbol{x},y) & =-\theta y+A(\theta)\label{eq:glm-loss}\\ A(\theta) & =-\ln[g(\theta)]\nonumber \end{align} The loss (\ref{eq:glm-loss}) is the negative logarithm of the probability (\ref{eq:GLM-eq}), where we keep only terms relevant to $\boldsymbol{w}$. Let the loss with Shakeout be \begin{equation} l_{\textrm{sko}}(\boldsymbol{w},\boldsymbol{x},y,\boldsymbol{r}):=l(\boldsymbol{w},\tilde{\boldsymbol{x}},y)\label{eq:shakeout-loss} \end{equation} where $\boldsymbol{r}=[r_{1},\dots,r_{p}]^{T}$, and $\tilde{\boldsymbol{x}}=[\tilde{x}_{1},\dots,\tilde{x}_{p}]^{T}$ represents the features randomly modified with $\boldsymbol{r}$. Taking the expectation over $\boldsymbol{r}$, the loss with Shakeout becomes \[ \mathbb{E}_{\boldsymbol{r}}[l_{\textrm{sko}}(\boldsymbol{w},\boldsymbol{x},y,\boldsymbol{r})]=l(\boldsymbol{w},\boldsymbol{x},y)+\pi(\boldsymbol{w}) \] where \begin{eqnarray} \pi(\boldsymbol{w}) & = & \mathbb{E}_{\boldsymbol{r}}[A(\tilde{\theta})-A(\theta)]\nonumber\\ & = & \sum_{k=1}^{\infty}\frac{1}{k!}A^{(k)}(\theta)\mathbb{E}[(\tilde{\theta}-\theta)^{k}]\label{eq:full-form-regw} \end{eqnarray} is named the \textit{Shakeout regularizer}. Note that if $A(\theta)$ is only $k$-times differentiable, we set the $k'$-th order derivative $A^{(k^{'})}(\theta)=0$ for $k^{'}>k$ to keep the notation simple. \newtheorem{theorem}{Theorem} \begin{theorem} \label{thm:shakeout-reg}Let $q_{j}=x_{j}(w_{j}+cs_{j})$, $\theta_{j-}=\theta-q_{j}$ and $\theta_{j+}=\theta+\frac{\tau}{1-\tau}q_{j}$, then the Shakeout regularizer $\pi(\boldsymbol{w})$ is \begin{equation} \pi(\boldsymbol{w})=\tau\sum_{j=1}^{p}A(\theta_{j-})+(1-\tau)\sum_{j=1}^{p}A(\theta_{j+})-pA(\theta)\label{eq:shakeout-reg-accurate} \end{equation} \end{theorem} \begin{IEEEproof} Note that $\tilde{\theta}-\theta=\sum_{j=1}^{p}q_{j}(r_{j}-1)$, then for Eq.
(\ref{eq:full-form-regw}) \begin{eqnarray*} \mathbb{E}[(\tilde{\theta}-\theta)^{k}] & = & \sum_{j_{1}=1}^{p}\sum_{j_{2}=1}^{p}\cdots\sum_{j_{k}=1}^{p}\prod_{m=1}^{k}q_{j_{m}}\mathbb{E}[\prod_{m=1}^{k}(r_{j_{m}}-1)] \end{eqnarray*} Because any two random variables $r_{j_{m_{1}}}$ and $r_{j_{m_{2}}}$ are independent unless $j_{m_{1}}=j_{m_{2}}$, and $\mathbb{E}[r_{j_{m}}-1]=0$ for all $r_{j_{m}}$, we have \begin{eqnarray*} \mathbb{E}[(\tilde{\theta}-\theta)^{k}] & = & \sum_{j=1}^{p}q_{j}^{k}\mathbb{E}[(r_{j}-1)^{k}]\\ & = & \tau\sum_{j=1}^{p}(-q_{j})^{k}+(1-\tau)\sum_{j=1}^{p}(\frac{\tau}{1-\tau}q_{j})^{k} \end{eqnarray*} Then \begin{eqnarray*} \pi(\boldsymbol{w}) & = & \tau\sum_{j=1}^{p}\sum_{k=1}^{\infty}\frac{1}{k!}A^{(k)}(\theta)(-q_{j})^{k}\\ & & +(1-\tau)\sum_{j=1}^{p}\sum_{k=1}^{\infty}\frac{1}{k!}A^{(k)}(\theta)(\frac{\tau}{1-\tau}q_{j})^{k} \end{eqnarray*} Further, letting $\theta_{j-}=\theta-q_{j}$ and $\theta_{j+}=\theta+\frac{\tau}{1-\tau}q_{j}$, $\pi(\boldsymbol{w})$ becomes \[ \pi(\boldsymbol{w})=\tau\sum_{j=1}^{p}A(\theta_{j-})+(1-\tau)\sum_{j=1}^{p}A(\theta_{j+})-pA(\theta) \] The theorem is proved. \end{IEEEproof} We illustrate several properties of the Shakeout regularizer based on Eq. (\ref{eq:shakeout-reg-accurate}). The proofs of the following propositions can be found in the appendices. \newtheorem{prop}{Proposition} \begin{prop} $\pi(\boldsymbol{0})=0$ \end{prop} \begin{prop} \label{prop:neq0} If $A(\theta)$ is convex, $\pi(\boldsymbol{w})\geq0$. \end{prop} \begin{prop} \label{prop:tau-and-c}Suppose $\exists j$, $x_{j}w_{j}\neq0$. If $A(\theta)$ is convex, $\pi(\boldsymbol{w})$ monotonically increases with $\tau$. If $A^{''}(\theta)>0$, $\pi(\boldsymbol{w})$ monotonically increases with $c$. \end{prop} Proposition \ref{prop:tau-and-c} implies that the hyper-parameters $\tau$ and $c$ relate to the strength of the regularization effect. This is reasonable because a higher $\tau$ or $c$ means that the noise injected into the features $\boldsymbol{x}$ has a larger variance. \begin{prop} \label{prop:sigle-w}Suppose \textit{i}) $\forall j\neq j^{'}$, $x_{j}w_{j}=0$, and \textit{ii}) $x_{j^{'}}\neq0$. Then \textit{i}) if $A^{''}(\theta)>0$, $\begin{cases} \frac{\partial\pi(\boldsymbol{w})}{\partial w_{j'}}>0, & \textrm{when}\ w_{j'}>0\\ \frac{\partial\pi(\boldsymbol{w})}{\partial w_{j'}}<0, & \textrm{when}\ w_{j'}<0 \end{cases}$ \textit{ii}) if $\lim_{|\theta|\rightarrow\infty}A^{''}(\theta)=0$, $\lim_{|w_{j'}|\rightarrow\infty}\frac{\partial\pi(\boldsymbol{w})}{\partial w_{j'}}=0$ \end{prop} Proposition \ref{prop:sigle-w} implies that under certain conditions, starting from a zero weight vector, the Shakeout regularizer penalizes the magnitude of $w_{j^{'}}$, and its regularization effect is bounded by a constant value. For example, for logistic regression, $\pi(\boldsymbol{w})\leq\tau\ln(1+\exp(c|x_{j^{'}}|))$, which is illustrated in Fig. \ref{fig:comparision-with-reg-approx}. This boundedness has proven useful: the capped norm \cite{DBLP:conf/ijcai/JiangNH15} is more robust to outliers than the traditional $L_{1}$ or $L_{2}$ norm.
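As a sanity check on Theorem \ref{thm:shakeout-reg}, one can compare the closed form of Eq. (\ref{eq:shakeout-reg-accurate}) against a Monte Carlo estimate of $\mathbb{E}_{\boldsymbol{r}}[A(\tilde{\theta})]-A(\theta)$. The short NumPy sketch below (ours, with arbitrary illustrative values of $\boldsymbol{w}$, $\boldsymbol{x}$, $\tau$ and $c$) does this for logistic regression, $A(\theta)=\ln(1+e^{\theta})$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = lambda t: np.log1p(np.exp(t))      # A(theta) for logistic regression

w = np.array([0.8, -0.5, 1.3])
x = np.array([1.0, 2.0, -0.4])
tau, c = 0.3, 0.78
s = np.sign(w)
q = x * (w + c * s)                    # q_j = x_j (w_j + c s_j)
theta = w @ x

# Closed form of Eq. (shakeout-reg-accurate)
pi_closed = (tau * A(theta - q).sum()
             + (1 - tau) * A(theta + tau / (1 - tau) * q).sum()
             - len(w) * A(theta))

# Monte Carlo estimate of E_r[A(theta_tilde)] - A(theta)
r = np.where(rng.random((200000, len(w))) < tau, 0.0, 1.0 / (1 - tau))
theta_tilde = ((r * w + c * (r - 1) * s) * x).sum(axis=1)
pi_mc = A(theta_tilde).mean() - A(theta)
print(pi_closed, pi_mc)                # the two values should agree closely
\end{verbatim}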
\begin{figure*}[!t] \centering \subfloat[Shakeout regularizer: $\tau=0.3$, $c=0.78$] {\includegraphics[bb=0bp 160bp 595bp 642bp,scale=0.3]{sko_1d} \label{fig:comparision-with-reg-approx-a}} \hfil \subfloat[Dropout regularizer: $\tau=0.5$] {\includegraphics[bb=0bp 160bp 595bp 642bp,scale=0.3]{dpo_1d} \label{fig:comparision-with-reg-approx-b}} \protect\caption{Regularization effect as a function of a single weight when the other weights are fixed to zero, for the logistic regression model. The corresponding feature $x$ is fixed at 1.} \label{fig:comparision-with-reg-approx} \end{figure*} Based on Eq. (\ref{eq:shakeout-reg-accurate}), specific formulas for representative GLM models can be derived: \textit{i}) Linear regression: $A(\theta)=\frac{1}{2}\theta^{2}$, then \[ \pi(\boldsymbol{w})=\frac{\tau}{2(1-\tau)}\left\Vert \x\circ(\w+c\boldsymbol{s})\right\Vert _{2}^{2} \] where $\circ$ denotes the element-wise product and the $\left\Vert \x\circ(\w+c\boldsymbol{s})\right\Vert _{2}^{2}$ term can be decomposed into the summation of three components \begin{equation} \sum_{j=1}^{p}x_{j}^{2}w_{j}^{2}+2c\sum_{j=1}^{p}x_{j}^{2}|w_{j}|+c^{2}\sum_{j=1}^{p}x_{j}^{2}\boldsymbol{1}_{w_{j}\neq0}[w_{j}]\label{eq:pen-decomp} \end{equation} where $\boldsymbol{1}_{w_{j}\neq0}[w_{j}]$ is an indicator function which satisfies $\boldsymbol{1}_{w_{j}\neq0}[w_{j}]=\begin{cases} 1 & w_{j}\neq0\\ 0 & w_{j}=0 \end{cases}$. This decomposition implies that the Shakeout regularizer penalizes a combination of the $L_{0}$-norm, $L_{1}$-norm and $L_{2}$-norm of the weights, after scaling them with the squares of the corresponding features. The $L_{0}$ and $L_{1}$ regularization terms can lead to sparse weights. \textit{ii}) Logistic regression: $A(\theta)=\ln(1+\exp(\theta))$, then \begin{equation} \pi(\boldsymbol{w})=\sum_{j=1}^{p}\ln(\frac{(1+\exp(\theta_{j-}))^{\tau}(1+\exp(\theta_{j+}))^{1-\tau}}{1+\exp(\theta)})\label{eq:sk-reg-lr} \end{equation} Fig. \ref{fig:contour-of-shakeout-reg} illustrates the contours of the Shakeout regularizer based on Eq. (\ref{eq:sk-reg-lr}) in the 2D weight space. On the whole, the contours indicate that the regularizer combines $L_{0}$, $L_{1}$ and $L_{2}$ regularization terms. As $c$ goes to zero, the contour around $w=0$ becomes less sharp, which implies that the hyper-parameter $c$ controls the strength of the $L_{0}$ and $L_{1}$ components. When $c=0$, Shakeout degenerates to Dropout, whose contour implies that the Dropout regularizer consists of an $L_2$ regularization term. The difference between the Shakeout and Dropout regularizers is also illustrated in Fig. \ref{fig:comparision-with-reg-approx}. We set $\tau=0.3$, $c=0.78$ for Shakeout, and $\tau=0.5$ for Dropout, to make the bounds of the regularization effects of the two regularizers the same. In this one-dimensional case, the main difference is that at $w=0$ (see the enlarged snapshot for comparison), the Shakeout regularizer is sharp and discontinuous while the Dropout regularizer is smooth. Thus, compared to Dropout, Shakeout may lead to much sparser weights of the model. To simplify the analysis and confirm the intuition about the properties of the Shakeout regularizer observed in Fig. \ref{fig:contour-of-shakeout-reg}, we quadratically approximate the Shakeout regularizer of Eq.
(\ref{eq:shakeout-reg-accurate}) by \begin{equation} \pi_{approx}(\boldsymbol{w})=\frac{\tau}{2(1-\tau)}A^{''}(\theta)\left\Vert \x\circ(\w+c\boldsymbol{s})\right\Vert _{2}^{2} \end{equation} The term $\left\Vert \x\circ(\w+c\boldsymbol{s})\right\Vert _{2}^{2}$, already expanded in Eq. (\ref{eq:pen-decomp}), combines $L_{0}$, $L_{1}$ and $L_{2}$ regularization terms. It tends to penalize weights whose corresponding features have large magnitudes. Meanwhile, weights whose corresponding features are always zero are penalized less. The term $A^{''}(\theta)$ is proportional to the variance of the prediction $y$ given $\boldsymbol{x}$ and $\boldsymbol{w}$. Penalizing $A^{''}(\theta)$ encourages the weights to move towards making the model more ``confident'' about its prediction, i.e. more discriminative. Generally speaking, the Shakeout regularizer adaptively combines $L_{0}$, $L_{1}$ and $L_{2}$ regularization terms, which matches what we have observed in Fig. \ref{fig:contour-of-shakeout-reg}. It preferentially penalizes weights with large magnitudes and encourages the weights to make the model more discriminative, while weights whose corresponding features are always zero are penalized less. The $L_{0}$ and $L_{1}$ components can induce sparse weights. Last but not least, we want to emphasize that when $\tau=0$, the noise is eliminated and the model becomes a standard GLM. Moreover, Dropout can be viewed as the special case of Shakeout when $c=0$, and a higher value of $\tau$ means a stronger $L_{2}$ regularization effect imposed on the weights. Generally, when $\tau$ is fixed ($\tau\neq0$), a higher value of $c$ imposes a stronger effect of the $L_{0}$ and $L_{1}$ components and leads to much sparser weights of the model. We will verify this property in the experiment section. \begin{figure*} \centering \includegraphics[bb=100bp 150bp 595bp 642bp,scale=0.35]{sk_reg_03} \includegraphics[bb=70bp 150bp 495bp 642bp,scale=0.35]{sk_reg_02} \centering \includegraphics[bb=100bp 180bp 595bp 642bp,scale=0.35]{sk_reg_01} \includegraphics[bb=70bp 180bp 495bp 642bp,scale=0.35]{dropout_reg} \protect\caption{The contour plots of the regularization effect induced by Shakeout in the 2D weight space with input $\boldsymbol{x}=[1,1]^T$. Note that Dropout is a special case of Shakeout with $c=0$.} \label{fig:contour-of-shakeout-reg} \end{figure*} \subsection{Shakeout in Multilayer Neural Networks} It has been illustrated that Shakeout regularizes the weights in linear modules. The linear module is the basic component of multilayer neural networks: linear operations connect the outputs of two successive layers. Thus Shakeout is readily applicable to the training of multilayer neural networks. Considering the forward computation from layer $l$ to layer $l+1$, for a fully-connected layer, the Shakeout forward computation is as follows \begin{align} u_{i} & =\sum_{j}x_{j}[r_{j}W_{ij}+c(r_{j}-1)S_{ij}]+b_{i}\label{eq:feed-forward-1} \end{align} \begin{align} x^{'}_{i} & =f(u_{i}) \end{align} where $i$ denotes the index of the output unit of layer $l+1$, and $j$ denotes the index of the output unit of layer $l$. The output unit of a layer is represented by $x$. The weight of the connection between unit $x_{j}$ and unit $x^{'}_{i}$ is represented as $W_{ij}$. The bias for the ${i}$-th unit is denoted by $b_{i}$. $S_{ij}$ is the sign of the corresponding weight $W_{ij}$.
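For illustration, a minimal NumPy sketch of this fully-connected forward pass (Eq. (\ref{eq:feed-forward-1}); the names are ours, and ReLU is used merely as an example activation) reads:

\begin{verbatim}
import numpy as np

def shakeout_fc_forward(x, W, b, tau, c, rng):
    """x: (p,) inputs of layer l; W: (n, p) weights; b: (n,) biases."""
    # One random switch r_j per *input* unit, shared by all weights W[:, j].
    r = np.where(rng.random(x.shape) < tau, 0.0, 1.0 / (1.0 - tau))
    S = np.sign(W)
    # u_i = sum_j x_j [ r_j W_ij + c (r_j - 1) S_ij ] + b_i
    u = (r * W + c * (r - 1.0) * S) @ x + b
    return np.maximum(u, 0.0), r   # e.g. ReLU; r is kept for back-propagation

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W, b = rng.standard_normal((4, 8)), np.zeros(4)
h, r = shakeout_fc_forward(x, W, b, tau=0.5, c=0.1, rng=rng)
\end{verbatim}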
After the Shakeout operation, the linear combination $u_{i}$ is sent to the activation function $f(\cdot)$ to obtain the corresponding output $x^{'}_{i}$. Note that the weights $W_{ij}$ that connect to the same input unit $x_{j}$ are controlled by the same random variable $r_{j}$. During back-propagation, we need to compute the gradients with respect to each unit to propagate the error. In Shakeout, $\frac{\partial u_{i}}{\partial x_{j}}$ takes the form \begin{align} \frac{\partial u_{i}}{\partial x_{j}} & =r_{j}(W_{ij}+cS_{ij})-cS_{ij} \end{align} and the weights are updated following \begin{align} \frac{\partial u_{i}}{\partial W_{ij}} & =x_{j}(r_{j}+c(r_{j}-1)\frac{d S_{ij}}{d W_{ij}}) \end{align} where $\frac{dS_{ij}}{dW_{ij}}$ represents the derivative of a $\textrm{sgn}$ function. Because the $\textrm{sgn}$ function is discontinuous at zero and its derivative is thus not defined there, we approximate this derivative with $\frac{d\tanh(W_{ij})}{dW_{ij}}$. Empirically we find that this approximation works well. Note that the forward-backward computations with Shakeout can be easily extended to the convolutional layer. For a convolutional layer, the Shakeout feed-forward process can be formalized as \begin{equation} \mathbf{U}_{i}=\sum_{j}(\mathbf{X}_{j}\circ\mathbf{R}_{j})*\mathbf{W}_{ij}+c(\mathbf{X}_{j}\circ(\mathbf{R}_{j}-1))*\mathbf{S}_{ij}+b_{i} \end{equation} \begin{equation} \mathbf{X}^{'}_{i}=f(\mathbf{U}_{i}) \end{equation} where $\mathbf{X}_{j}$ represents the $j$-th feature map. $\mathbf{R}_{j}$ is the $j$-th random mask, which has the same spatial structure (i.e. the same height and width) as the corresponding feature map $\mathbf{X}_{j}$. $\mathbf{W}_{ij}$ denotes the kernel connecting $\mathbf{X}_{j}$ and $\mathbf{U}_{i}$, and $\mathbf{S}_{ij}$ is set to $\mathrm{sgn}(\mathbf{W}_{ij})$. The symbol {*} denotes the convolution operation, and the symbol $\circ$ denotes the element-wise product. Correspondingly, during the back-propagation process, the gradient with respect to a unit of the layer on which Shakeout is applied takes the form \begin{align} \frac{\partial\mathbf{U}_{i}(a,b)}{\partial\mathbf{X}_{j}(a-a^{'},b-b^{'})} & =\mathbf{R}_{j}(a-a^{'},b-b^{'})(\mathbf{W}_{ij}(a^{'},b^{'})+\nonumber \\ & c\mathbf{S}_{ij}(a^{'},b^{'}))-c\mathbf{S}_{ij}(a^{'},b^{'}) \end{align} where $(a,b)$ denotes the position of a unit in the output feature map of a layer, and $(a^{'},b^{'})$ represents the position of a weight in the corresponding kernel. The weights are updated following \begin{multline} \frac{\partial\mathbf{U}_{i}(a,b)}{\partial\mathbf{W}_{ij}(a^{'},b^{'})}=\mathbf{X}_{j}(a-a^{'},b-b^{'})(\mathbf{R}_{j}(a-a^{'},b-b^{'})\\ +c(\mathbf{R}_{j}(a-a^{'},b-b^{'})-1)\frac{d \mathbf{S}_{ij}(a^{'},b^{'})}{d \mathbf{W}_{ij}(a^{'},b^{'})}) \end{multline} \section{Experiments} In this section, we report empirical evaluations of Shakeout in training deep neural networks on representative datasets. The experiments are performed on three image datasets: the hand-written digit dataset MNIST \cite{lecun1998gradient}, the CIFAR-10 image dataset \cite{krizhevsky2009learning} and the ImageNet-2012 dataset \cite{ILSVRC15}. MNIST consists of 60,000+10,000 (training+test) 28$\times$28 images of hand-written digits. CIFAR-10 contains 50,000+10,000 (training+test) 32$\times$32 images of 10 object classes. ImageNet-2012 consists of 1,281,167+50,000+150,000 (training+validation+test) variable-resolution images of 1000 object classes.
We first demonstrate that Shakeout leads to sparse models, as our theoretical analysis implies, under the unsupervised setting. Then we show that for the classification task, the sparse models have desirable generalization performance. Further, we illustrate the regularization effect of Shakeout on the weights in the classification task. Moreover, the effect of Shakeout on stabilizing the training processes of deep architectures is demonstrated. Finally, we give some practical recommendations for Shakeout. All the experiments are implemented based on modifications of the \textit{Caffe} library \cite{jia2014caffe}. Our code is released on GitHub: https://github.com/kgl-prml/shakeout-for-caffe. \subsection{\label{sub:autoencoder-weight-sparsity}Shakeout and Weight Sparsity} Since Shakeout implicitly imposes $L_{0}$ and $L_{1}$ penalties on the weights, we expect the weights of neural networks learned by Shakeout to contain more zeros than those learned by standard back-propagation (BP) \cite{williams1986learning} or Dropout \cite{hinton2012improving}. In this experiment, we employ an autoencoder model for the MNIST hand-written data, train the model using standard BP, Dropout and Shakeout, respectively, and compare the degree of sparsity of the weights of the learned encoders. For the purpose of demonstration, we employ a simple autoencoder with one hidden layer of 256 units; Dropout and Shakeout are applied on the input pixels. To verify the regularization effect, we compare the weights of the four autoencoders trained under different settings, corresponding to standard BP, Dropout ($\tau=0.5$) and Shakeout ($\tau=0.5$, $c=\{1,10\}$). All the training methods aim to produce hidden units which capture good visual features of the handwritten digits. The statistical traits of the resulting weights are shown in Fig. \ref{fig:The-distributions-of-AE}. Moreover, Fig. \ref{fig:Learned-weights-of-vi} shows the features captured by each hidden unit of the autoencoders. As shown in Fig. \ref{fig:The-distributions-of-AE}, the probability density of weights around zero obtained by standard BP training is quite small compared to that obtained by either Dropout or Shakeout. This indicates the strong regularization effect induced by Dropout and Shakeout. Furthermore, the sparsity level of the weights obtained by training with Shakeout is much higher than that obtained by training with Dropout. For the same $\tau$, increasing $c$ makes the weights much sparser, which is consistent with the characteristics of the $L_{0}$ and $L_{1}$ penalties induced by Shakeout. Intuitively, due to the induced $L_{2}$ regularization, the distribution of the weights trained by Dropout resembles a Gaussian, while that trained by Shakeout resembles a Laplacian because of the additionally induced $L_{1}$ regularization. Fig. \ref{fig:Learned-weights-of-vi} shows that the features captured by the hidden units via standard BP training are not directly interpretable, corresponding to insignificant variations in the training data. Both Dropout and Shakeout suppress irrelevant weights by their regularization effects, where Shakeout produces much sparser and more global features thanks to the combination of $L_{0}$, $L_{1}$ and $L_{2}$ regularization terms. The autoencoder trained by Dropout or Shakeout can be viewed as a denoising autoencoder, where Dropout or Shakeout injects a special kind of noise into the inputs.
Under this unsupervised setting, the denoising criterion (i.e. minimizing the error between imaginary images reconstructed from the noisy inputs and the real images without noise) guides the learning of useful high-level feature representations \cite{vincent2008extracting,vincent2010stacked}. To verify that Shakeout helps learn better feature representations, we adopt the hidden layer activations as features to train SVM classifiers; the classification accuracies on the test set for standard BP, Dropout and Shakeout are 95.34\%, 96.41\% and 96.48\%, respectively. We can see that Shakeout leads to much sparser weights without defeating the main objective. Gaussian Dropout has a similar effect on model training to standard Dropout \cite{srivastava2014dropout}; it multiplies the activation of each unit by a Gaussian variable with mean 1 and variance $\sigma^{2}$. The relationship between $\sigma^{2}$ and $\tau$ is $\sigma^{2}=\frac{\tau}{1-\tau}$. The distribution of the weights trained by Gaussian Dropout ($\sigma^{2}=1$, i.e. $\tau=0.5$) is illustrated in Fig. \ref{fig:The-distributions-of-AE}. From Fig. \ref{fig:The-distributions-of-AE}, we find no notable statistical difference between the two kinds of Dropout implementations, both of which exhibit a kind of $L_{2}$ regularization effect on the weights. The classification performances of SVM classifiers on the test set, based on the hidden layer activations as extracted features, are quite similar for both kinds of Dropout implementations (i.e. $96.41\%$ and $96.43\%$ for standard and Gaussian Dropout, respectively). Based on these observations, we conduct the following classification experiments using standard Dropout as the representative implementation of Dropout for comparison. \begin{figure} \centering \includegraphics[bb=0bp 180bp 595bp 662bp,scale=0.4]{journal-autoencoder} \protect\caption{Distributions of the weights of the autoencoder models learned by different training approaches. Each curve in the figure shows the frequencies of the weights of an autoencoder taking particular values, i.e. the empirical population densities of the weights. The five curves correspond to five autoencoders learned by standard back-propagation, Dropout ($\tau=0.5$), Gaussian Dropout ($\sigma^{2}=1$) and Shakeout ($\tau=0.5$, $c=\{1,10\}$). The sparsity of the weights obtained via Shakeout can be seen by comparing the curves. } \label{fig:The-distributions-of-AE} \end{figure} \begin{figure*}[!t] \centering \subfloat[standard BP]{\includegraphics[scale=0.38]{original_0_encode1_weights}} \hfil \subfloat[Dropout: $\tau=0.5$]{\includegraphics[scale=0.38]{dropout_2_encode1_weights}} \hfil \subfloat[Shakeout: $\tau=0.5$, $c=0.5$]{\includegraphics[scale=0.38]{shakeout_2_encode1_weights}} \caption{Features captured by the hidden units of the autoencoder models learned by different training methods. The features captured by a hidden unit are represented by the group of weights that connect the image pixels with this hidden unit. One image patch in a sub-graph corresponds to the features captured by one hidden unit.} \label{fig:Learned-weights-of-vi} \end{figure*} \subsection{Classification Experiments} Sparse models often indicate lower complexity and better generalization performance \cite{tibshirani1996regression,zou2005regularization,olshausen1997sparse,yuan2013efficient}.
To verify the effect of the $L_{0}$ and $L_{1}$ regularization terms induced by Shakeout on model performance, we apply Shakeout, along with Dropout and standard BP, to training representative deep neural networks for classification tasks. In all of our classification experiments, the hyper-parameters $\tau$ and $c$ in Shakeout, and the hyper-parameter $\tau$ in Dropout, are determined by validation. \subsubsection{MNIST} We train two different neural networks, a shallow fully-connected one and a deep convolutional one. For the fully-connected neural network, a large hidden layer of 4096 units is adopted. The non-linear activation adopted is the rectified linear unit (ReLU). The deep convolutional neural network employed is a modification of LeNet \cite{lecun1998gradient}, containing two convolutional layers and two fully-connected layers. The detailed architecture of this convolutional neural network is described in Tab. \ref{tab:The-architechture-of-conv}. \begin{table} \protect\caption{\label{tab:The-architechture-of-conv}The architecture of the convolutional neural network adopted for the MNIST classification experiment } \centering{}% \begin{tabular}{|c|c|c|c|c|} \hline Layer & 1 & 2 & 3 & 4\tabularnewline \hline \hline Type & conv. & conv. & FC & FC\tabularnewline \hline Channels & 20 & 50 & 500 & 10\tabularnewline \hline Filter size & $5\times5$ & $5\times5$ & - & -\tabularnewline \hline Conv. stride & 1 & 1 & - & -\tabularnewline \hline Pooling type & max & max & - & -\tabularnewline \hline Pooling size & $2\times2$ & $2\times2$ & - & -\tabularnewline \hline Pooling stride & 2 & 2 & - & -\tabularnewline \hline Non-linear & ReLU & ReLU & ReLU & Softmax\tabularnewline \hline \end{tabular} \end{table} We separate 10,000 training samples from the original training dataset for validation. The results are shown in Tab. \ref{tab:MNIST-test-classification-fc} and Tab. \ref{tab:MNIST-test-classification-cov}. Dropout and Shakeout are applied on the hidden units of the fully-connected layer. The tables compare the errors of the networks trained by standard back-propagation, Dropout and Shakeout. The mean and standard deviation of the classification errors are obtained from 5 runs of the experiment and are shown in percentage. We can see from the results that when the training data is insufficient, all the models perform worse due to over-fitting. However, the models trained by Dropout and Shakeout consistently perform better than the one trained by standard BP. Moreover, when the training data is scarce, Shakeout leads to superior model performance compared to Dropout. Fig. \ref{fig:MNIST-test-classification} shows the results in a more intuitive way.
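For concreteness, the architecture of Tab. \ref{tab:The-architechture-of-conv} corresponds roughly to the following PyTorch-style definition (an illustrative re-implementation; the original experiments used \textit{Caffe}, and the exact ordering of ReLU and pooling follows our reading of the table):

\begin{verbatim}
import torch.nn as nn

# 28x28 gray-scale input -> conv5 -> pool2 -> conv5 -> pool2 -> FC500 -> FC10
lenet_variant = nn.Sequential(
    nn.Conv2d(1, 20, kernel_size=5, stride=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),       # 24x24 -> 12x12
    nn.Conv2d(20, 50, kernel_size=5, stride=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),       # 8x8 -> 4x4
    nn.Flatten(),
    nn.Linear(50 * 4 * 4, 500), nn.ReLU(),
    nn.Linear(500, 10),   # the Softmax of layer 4 is folded into the loss
)
\end{verbatim}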
\begin{table}[!t] \protect\caption{Classification on MNIST using training sets of different sizes: fully-connected neural network} \label{tab:MNIST-test-classification-fc} \centering{}% \begin{tabular}{|c|c|c|c|} \hline Size & std-BP & Dropout & Shakeout \tabularnewline \hline \hline 500 & 13.66$\pm$0.66 & 11.76$\pm$0.09 & \textbf{10.81}$\pm$0.32\tabularnewline \hline 1000 & 8.49$\pm$0.23 & 8.05$\pm$0.05 & \textbf{7.19}$\pm$0.15\tabularnewline \hline 3000 & 5.54$\pm$0.09 & 4.87$\pm$0.06 & \textbf{4.60}$\pm$0.07\tabularnewline \hline 8000 & 3.57$\pm$0.14 & \textbf{2.95}$\pm$0.05 & 2.96$\pm$0.09\tabularnewline \hline 20000 & 2.28$\pm$0.09 & \textbf{1.82}$\pm$0.07 & 1.92$\pm$0.06\tabularnewline \hline 50000 & 1.55$\pm$0.03 & 1.36$\pm$0.03 & \textbf{1.35}$\pm$0.07\tabularnewline \hline \end{tabular} \end{table} \begin{figure*} \centering \subfloat[Fully-connected neural network]{\includegraphics[bb=0bp 180bp 595bp 600bp,scale=0.35]{mnist_fc_error}} \hfil \subfloat[Convolutional neural network]{\includegraphics[bb=0bp 180bp 595bp 600bp,scale=0.35]{mnist_convNet_error}} \protect\caption{Classification of two kinds of neural networks on MNIST using training sets of different sizes. The curves show the performances of the models trained by standard BP, and those by Dropout and Shakeout applied on the hidden units of the fully-connected layer.} \label{fig:MNIST-test-classification} \end{figure*} \subsubsection{CIFAR-10}\label{sec:cifar-10-sec} We use the simple convolutional network feature extractor described in cuda-convnet (layers-80sec.cfg) \cite{krizhevskycuda}. We apply Dropout and Shakeout on the first fully-connected layer. We call this architecture ``AlexFastNet'' for convenience of description. In this experiment, 10,000 colour images are separated from the training dataset for validation, and no data augmentation is utilized. The per-pixel mean computed over the training set is subtracted from each image. We first train for 100 epochs with an initial learning rate of 0.001 and then for another 50 epochs with a learning rate of 0.0001. The mean and standard deviation of the classification errors are obtained from 5 runs of the experiment and are shown in percentage.
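The per-pixel mean subtraction mentioned above amounts to the following (a sketch with stand-in arrays; the array names and shapes are ours):

\begin{verbatim}
import numpy as np

# Illustrative stand-ins for the CIFAR-10 arrays of shape (N, H, W, C).
train_images = np.random.rand(50000, 32, 32, 3).astype(np.float32)
test_images = np.random.rand(10000, 32, 32, 3).astype(np.float32)

mean_image = train_images.mean(axis=0)  # per-pixel mean over the training set
train_images -= mean_image
test_images -= mean_image               # the training mean is reused at test time
\end{verbatim}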
\begin{table}[!t] \protect\caption{Classification on MNIST using training sets of different sizes: convolutional neural network} \label{tab:MNIST-test-classification-cov} \centering{}% \begin{tabular}{|c|c|c|c|} \hline Size & std-BP & Dropout & Shakeout\tabularnewline \hline \hline 500 & 9.76$\pm$0.26 & 6.16$\pm$0.23 & \textbf{4.83}$\pm$0.11\tabularnewline \hline 1000 & 6.73$\pm$0.12 & 4.01$\pm$0.16 & \textbf{3.43}$\pm$0.06\tabularnewline \hline 3000 & 2.93$\pm$0.10 & 2.06$\pm$0.06 & \textbf{1.86}$\pm$0.13\tabularnewline \hline 8000 & 1.70$\pm$0.03 & \textbf{1.23}$\pm$0.13 & 1.31$\pm$0.06\tabularnewline \hline 20000 & 0.97$\pm$0.01 & 0.83$\pm$0.06 & \textbf{0.77}$\pm$0.001\tabularnewline \hline 50000 & 0.78$\pm$0.05 & 0.62$\pm$0.04 & \textbf{0.58}$\pm$0.10\tabularnewline \hline \end{tabular} \end{table} \begin{table}[!t] \protect\caption{Classification on CIFAR-10 using training sets of different sizes: AlexFastNet } \label{tab:CIFAR10-test-classification-quick} \centering{}% \begin{tabular}{|c|c|c|c|} \hline Size & std-BP & Dropout & Shakeout\tabularnewline \hline \hline 300 & 68.26$\pm$0.57 & 65.34$\pm$0.75 & \textbf{63.71}$\pm$0.28\tabularnewline \hline 700 & 59.78$\pm$0.24 & 56.04$\pm$0.22 & \textbf{54.66}$\pm$0.22\tabularnewline \hline 2000 & 50.73$\pm$0.29 & 46.24$\pm$0.49 & \textbf{44.39}$\pm$0.41\tabularnewline \hline 5500 & 41.41$\pm$0.52 & 36.01$\pm$0.13 & \textbf{34.54}$\pm$0.31\tabularnewline \hline 15000 & 32.53$\pm$0.25 & 27.28$\pm$0.26 & \textbf{26.53}$\pm$0.17\tabularnewline \hline 40000 & 24.48$\pm$0.23 & \textbf{20.50}$\pm$0.32 & 20.56$\pm$0.12\tabularnewline \hline \end{tabular} \end{table} As shown in Tab. \ref{tab:CIFAR10-test-classification-quick}, the performances of the models trained by Dropout and Shakeout are consistently superior to that of the one trained by standard BP. Furthermore, the model trained by Shakeout also outperforms the one trained by Dropout when the training data is scarce. Fig. \ref{fig:CIFAR10-test-classification-quick} shows the results in a more intuitive way. \begin{figure} \centering \includegraphics[bb=0bp 200bp 595bp 620bp,scale=0.35]{cifar10_quick_error} \protect\caption{Classification on CIFAR-10 using training sets of different sizes. The curves show the performances of the models trained by standard BP, and those by Dropout and Shakeout applied on the hidden units of the fully-connected layer.} \label{fig:CIFAR10-test-classification-quick} \end{figure} To test the performance of Shakeout on a much deeper architecture, we also conduct experiments based on the Wide Residual Network (WRN) \cite{DBLP:conf/bmvc/ZagoruykoK16}. The configuration of WRN adopted is WRN-16-4, which means the WRN has 16 layers in total and the number of feature maps of the convolutional layer in each residual block is 4 times that of the corresponding original one \cite{he2016identity}. Because the complexity is much higher than that of ``AlexFastNet'', the experiments are performed on relatively larger training sets, with sizes of 15000, 40000 and 50000. Dropout and Shakeout are applied on the second convolutional layer of each residual block, following the protocol in \cite{DBLP:conf/bmvc/ZagoruykoK16}. All the training runs start from the same initial weights. Batch Normalization is applied in the same way as in \cite{DBLP:conf/bmvc/ZagoruykoK16} to facilitate the optimization. No data augmentation or data pre-processing is adopted. All hyper-parameters other than $\tau$ and $c$ are set as in \cite{DBLP:conf/bmvc/ZagoruykoK16}. The results are listed in Tab.
\ref{tab:w16-4-cifar10}. For the training of CIFAR-10 with 50000 training samples, we adopt the same hyper-parameters as those chosen for the training set of size 40000. From Tab. \ref{tab:w16-4-cifar10}, we can arrive at the same conclusion as in the previous experiments, i.e. the performances of the models trained by Dropout and Shakeout are consistently superior to that of the one trained by standard BP. Moreover, Shakeout outperforms Dropout when the data is scarce. \begin{table}[!t] \begin{centering} \protect\caption{Classification on CIFAR-10 using training sets of different sizes: WRN-16-4 } \label{tab:w16-4-cifar10} \par\end{centering} \centering{}% \begin{tabular}{|c|c|c|c|} \hline Size & std-BP & Dropout & Shakeout\tabularnewline \hline \hline 15000 & 20.95 & 15.05 & \textbf{14.68}\tabularnewline \hline 40000 & 15.37 & 9.32 & \textbf{9.01}\tabularnewline \hline 50000 & 14.39 & 8.03 & \textbf{7.97}\tabularnewline \hline \end{tabular} \end{table} \begin{figure*}[!t] \centering \subfloat[AlexNet FC7 layer]{\includegraphics[bb=0bp 180bp 595bp 650bp,scale=0.35]{compare_weights_fc7}} \hfil \subfloat[AlexNet FC8 layer]{\includegraphics[bb=30bp 180bp 595bp 650bp,scale=0.35]{compare_weights_fc8}} \protect\caption{Comparison of the distributions of the magnitudes of the weights trained by Dropout and Shakeout. The experiments are conducted using AlexNet on the ImageNet-2012 dataset. Shakeout or Dropout is applied on the last two fully-connected layers, i.e. the FC7 layer and the FC8 layer.} \label{fig:classification-sparse} \end{figure*} \begin{figure*}[!t] \centering \subfloat[AlexNet FC7 layer]{\includegraphics[bb=0bp 180bp 595bp 650bp,scale=0.35]{compare_group_fc7}} \hfil \subfloat[AlexNet FC8 layer]{\includegraphics[bb=30bp 180bp 595bp 650bp,scale=0.35]{compare_group_fc8}} \caption{Distributions of the maximum magnitude of the weights connected to the same input unit of a layer. The maximum magnitude of the weights connected to one input unit can be regarded as a metric of the importance of that unit. The experiments are conducted using AlexNet on the ImageNet-2012 dataset. For Shakeout, the units can be approximately separated into two groups, and the one around zero is less important than the other, whereas for Dropout, the units are more concentrated.} \label{fig:grouping-effect} \end{figure*} \subsubsection{Regularization Effect on the Weights} Shakeout regularizes the training process of deep neural networks in a different way from Dropout. For a GLM model, we have proved that the regularizer induced by Shakeout adaptively combines $L_{0}$, $L_{1}$ and $L_{2}$ regularization terms. In Section \ref{sub:autoencoder-weight-sparsity}, we have demonstrated that for a one-hidden-layer autoencoder, it leads to much sparser weights of the model. In this section, we illustrate the regularization effect of Shakeout on the weights in the classification task and compare it to that of Dropout. The results shown in this section are mainly based on the experiments conducted on the ImageNet-2012 dataset using the representative deep architecture AlexNet \cite{krizhevsky2012imagenet}. For AlexNet, we apply Dropout or Shakeout on layers FC7 and FC8, which are the last two fully-connected layers. We train the model from scratch and obtain comparable classification performances on the validation set for Shakeout (top-1 error: 42.88\%; top-5 error: 19.85\%) and Dropout (top-1 error: 42.99\%; top-5 error: 19.60\%).
The model is trained using the same hyper-parameter settings provided by Shelhamer in \textit{Caffe} \cite{jia2014caffe}, apart from the Shakeout hyper-parameters $\tau$ and $c$. The initial weights for training by Dropout and Shakeout are kept the same. Fig. \ref{fig:classification-sparse} illustrates the distributions of the magnitudes of the weights obtained by Shakeout and Dropout. It can be seen that the weights learned by Shakeout are much sparser than those learned by Dropout, due to the implicitly induced $L_{0}$ and $L_{1}$ components. The regularizer induced by Shakeout contains not only $L_{0}$ and $L_{1}$ regularization terms but also an $L_{2}$ regularization term; this combination is expected to discard a group of weights simultaneously. In Fig. \ref{fig:grouping-effect}, we use the maximum magnitude of the weights connected to one input unit of a layer to represent the importance of that unit for the subsequent output units. From Fig. \ref{fig:grouping-effect}, it can be seen that for Shakeout, the units can be approximately separated into two groups according to the maximum magnitudes of the connected weights, and the group around zero can be discarded, whereas for Dropout, the units are concentrated. This implies that compared to Dropout, which may encourage a ``distributed code'' for the features captured by the units of a layer, Shakeout tends to discard the useless features (or units) and reward the important ones. This experimental result further verifies the regularization properties of Shakeout and Dropout. It is well known that $L_{0}$ and $L_{1}$ regularization terms are related to feature selection \cite{guyon2003introduction,7346492}. For a deep architecture, Shakeout is thus expected to produce a set of weights that reflects the importance of the connections between units. We perform the following experiment to verify this effect. After a model is trained, for the layer on which Dropout or Shakeout is applied, we sort the weights by magnitude in increasing order. Then we prune the first $m\%$ of the sorted weights and evaluate the performance of the pruned model again. The pruning ratio $m$ goes from 0 to 100. We calculate the relative accuracy loss (abbreviated $R.A.L$) at each pruning ratio $m^{'}$ as \[ R.A.L(m^{'})=\frac{Accu.(m=0)-Accu.(m^{'})}{Accu.(m=0)} \] Fig. \ref{fig:relative-accuracy-drop} shows the $R.A.L$ curves for Dropout and Shakeout based on the AlexNet model on the ImageNet-2012 dataset. The models trained by Dropout and Shakeout are under the optimal hyper-parameter settings. Apparently, the relative accuracy loss for Dropout is more severe than that for Shakeout. For example, the largest margin of the relative accuracy losses between Dropout and Shakeout is $22.50\%$, which occurs at the weight pruning ratio $m=96\%$. This result shows that the weights trained by Shakeout reflect the importance of connections much better than those trained by Dropout, a benefit of the implicitly induced $L_{0}$ and $L_{1}$ regularization effect.
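A sketch of this pruning experiment is given below (ours, for illustration; \texttt{evaluate} is a placeholder for writing the weights back into the network and measuring its accuracy):

\begin{verbatim}
import numpy as np

def relative_accuracy_loss(weights, evaluate, ratios):
    """weights: 1-D array of a layer's weights; ratios: pruning ratios in [0, 1]."""
    order = np.argsort(np.abs(weights))   # sort magnitudes in increasing order
    base = evaluate(weights)              # Accu.(m = 0)
    losses = []
    for m in ratios:
        pruned = weights.copy()
        pruned[order[:int(m * len(weights))]] = 0.0  # prune the m smallest
        losses.append((base - evaluate(pruned)) / base)
    return losses

# Illustrative usage with a dummy evaluator (a real one would run the model):
dummy_eval = lambda w: 0.5 + 0.5 * np.abs(w).sum() / (np.abs(w).sum() + 1.0)
w = np.random.default_rng(0).standard_normal(1000)
print(relative_accuracy_loss(w, dummy_eval, ratios=np.linspace(0.0, 1.0, 5)))
\end{verbatim}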
\setcounter{figure}{10} \begin{figure*} \centering \subfloat[standard BP] {\includegraphics[bb=0bp 150bp 595bp 652bp,scale=0.28]{standard-bp-loss-curve} \label{fig:DCGAN-objective-value-a} } \subfloat[Dropout] {\includegraphics[bb=0bp 150bp 595bp 652bp,scale=0.28]{dropout-loss-curve} \label{fig:DCGAN-objective-value-b} } \subfloat[Shakeout] {\includegraphics[bb=0bp 150bp 595bp 652bp,scale=0.28]{shakeout-loss-curve} \label{fig:DCGAN-objective-value-c} } \caption{The value of $-V(D,G)$ as a function of iteration for the training process of DCGAN. DCGANs are trained using standard BP, Dropout and Shakeout for comparison. Dropout or Shakeout is applied on the discriminator of the GAN.} \label{fig:DCGAN-objective-value} \end{figure*} This kind of property is useful for the popular compression task in the deep learning area, which aims to remove connections or units of a deep neural network to the maximum extent without obvious loss of accuracy. The above experiments illustrate that Shakeout can play a considerable role in selecting important connections, which is meaningful for promoting the performance of a compression task. This is a potential subject for future research. \setcounter{figure}{9} \begin{figure} \centering \includegraphics[bb=30bp 165bp 575bp 685bp,scale=0.30]{alexnet_weight_pruning} \caption{Relative accuracy loss as a function of the weight pruning ratio for Dropout and Shakeout based on the AlexNet architecture on ImageNet-2012. The relative accuracy loss for Dropout is much more severe than that for Shakeout. The largest margin of the relative accuracy losses between Dropout and Shakeout is $22.50\%$, which occurs at the weight pruning ratio $m=96\%$.} \label{fig:relative-accuracy-drop} \end{figure} \setcounter{figure}{11} \subsection{Stabilization Effect on the Training Process} In both research and production, it is always desirable to have a level of certainty about how a model\textquoteright s fitness to the data improves over optimization iterations, namely, to have a \textit{stable} training process. In this section, we show that Shakeout helps reduce fluctuations in the improvement of model fitness during training. The first experiment is on the family of Generative Adversarial Networks (GANs) \cite{goodfellow2014generative}, which are known to be unstable in the training stage \cite{radford2015unsupervised, arjovsky2017towards, arjovsky2017wasserstein}. The purpose of the following tests is to demonstrate Shakeout\textquoteright s capability of stabilizing the training process of neural networks in a general sense. A GAN plays a min-max game between the generator $G$ and the discriminator $D$ over the expected log-likelihood of real data $\boldsymbol{x}$ and imaginary data $\hat{\boldsymbol{x}}=G(\boldsymbol{z})$, where $\boldsymbol{z}$ represents the random input \begin{equation} \min_{G}\max_{D}V(D,G)=\mathbb{E}_{\boldsymbol{x}}[\log D(\boldsymbol{x})] +\mathbb{E}_{\boldsymbol{z}}[\log(1-D(G(\boldsymbol{z})))] \end{equation} The architecture that we adopt is DCGAN \cite{radford2015unsupervised}. The numbers of feature maps of the deconvolutional layers in the generator are 1024, 64 and 1, respectively, with the corresponding spatial sizes 7$\times$7, 14$\times$14 and 28$\times$28. We train DCGANs on the MNIST dataset using standard BP, Dropout and Shakeout. We follow the same experiment protocol described in \cite{radford2015unsupervised}, except for adopting Dropout or Shakeout on all layers of the discriminator. The values of $-V(D,G)$ during training are illustrated in Fig. \ref{fig:DCGAN-objective-value}.
It can be seen that $-V(D,G)$ during training by standard BP oscillates greatly, while for Dropout and Shakeout, the training processes are much steadier. Compared with Dropout, the training process with Shakeout has fewer spikes and is smoother. Fig. \ref{fig:gan-minmax} shows the minimum and maximum values of $-V(D,G)$ within fixed-length intervals moving from the start to the end of the training by standard BP, Dropout and Shakeout. It can be seen that the gaps between the minimum and maximum values of $-V(D,G)$ trained by Dropout and Shakeout are much smaller than that trained by standard BP, while that of Shakeout is the smallest, which implies that the training process with Shakeout is the most stable. \begin{figure} \centering \includegraphics[bb=30bp 170bp 575bp 690bp,scale=0.32]{gan-min-max-02} \caption{The minimum and maximum values of $-V(D,G)$ within fixed-length intervals moving from the start to the end of the training by standard BP, Dropout and Shakeout. The optimal value log(4) is obtained when the imaginary data distribution $P(\hat{\boldsymbol{x}})$ matches the real data distribution $P(\boldsymbol{x})$.} \label{fig:gan-minmax} \end{figure} The second experiment is based on the Wide Residual Network architecture and performs the classification task. In the classification task, generalization performance is the main focus, and thus we compare the validation errors during the training processes with Dropout and Shakeout. Fig. \ref{fig:CIFAR10-W16-4-training-curve} shows the validation error as a function of the training epoch for Dropout and Shakeout on CIFAR-10 with 40000 training examples. The architecture adopted is WRN-16-4. The experiment settings are the same as those described in Section \ref{sec:cifar-10-sec}. Considering the generalization performance, the learning rate schedule adopted is the one optimized through validation to give the models the best generalization performance. Under this schedule, we find that the validation error temporarily increases when lowering the learning rate at the early stage of training, which was also repeatedly observed in \cite{DBLP:conf/bmvc/ZagoruykoK16}. Nevertheless, it can be seen from Fig. \ref{fig:CIFAR10-W16-4-training-curve} that the extent of the error increase is less severe for Shakeout than for Dropout. Moreover, Shakeout recovers much faster than Dropout does. At the final stage, both validation errors steadily decrease. Shakeout obtains comparable or even superior generalization performance to Dropout. In summary, Shakeout significantly stabilizes the entire training process while achieving superior generalization performance. \begin{figure}[!t] \centering \includegraphics[bb=0bp 160bp 630bp 700bp,scale=0.40]{20-200-sk02005} \protect\caption{Validation error as a function of training epoch for Dropout and Shakeout on CIFAR-10 with training set size 40000. The architecture adopted is WRN-16-4. \textquotedblleft DPO" and \textquotedblleft SKO" represent \textquotedblleft Dropout" and \textquotedblleft Shakeout" respectively. The following two numbers denote the hyper-parameters $\tau$ and $c$ respectively. The learning rate decays at epochs 60, 120, and 160. After the first decay of the learning rate, the validation error increases greatly before the steady decrease (see the enlarged snapshot for training epochs from 60 to 80). It can be seen that the extent of the error increase is less severe for Shakeout than for Dropout. Moreover, Shakeout recovers much faster than Dropout does.
At the final stage, both validation errors steadily decrease (see the enlarged snapshot for training epochs from 160 to 200). Shakeout obtains comparable or even superior generalization performance to Dropout.} \label{fig:CIFAR10-W16-4-training-curve} \end{figure} \subsection{Practical Recommendations} \noindent \textbf{\textit{Selection of Hyper-parameters}} The most practical and popular way to perform hyper-parameter selection is to partition the training data into a training set and a validation set, and to evaluate the classification performance of different hyper-parameters on the latter. Due to the expensive training time of a deep neural network, cross-validation is rarely adopted. There exist many hyper-parameter selection methods in the domain of deep learning, such as grid search, random search \cite{bergstra2012random}, Bayesian optimization methods \cite{snoek2012practical}, gradient-based hyper-parameter optimization \cite{maclaurin2015gradient}, etc. To apply Shakeout to a deep neural network, we need to choose the two hyper-parameters $\tau$ and $c$. From the regularization perspective, we need to determine the most suitable strength of the regularization effect to obtain an optimal trade-off between model bias and variance. We have pointed out that, in a unified framework, Dropout is a special case of Shakeout when the Shakeout hyper-parameter $c$ is set to zero. Empirically we find that the optimal $\tau$ for Shakeout is not higher than that for Dropout. After determining the optimal $\tau$, keeping the hyper-parameter $c$ on the same order of magnitude as $\sqrt{\frac{1}{N}}$ ($N$ represents the number of training samples) is an effective choice. To obtain a model with much sparser weights and with generalization performance superior or comparable to Dropout, a relatively lower $\tau$ and a larger $c$ for Shakeout always work. \noindent \textbf{\textit{Shakeout combined with Batch Normalization}} Batch Normalization \cite{DBLP:conf/icml/IoffeS15} is a widely adopted technique for facilitating the optimization of the training process of a deep neural network. In practice, combining Shakeout with Batch Normalization to train a deep architecture is a good choice. For example, we observe that the training of the WRN-16-4 model on CIFAR-10 converges slowly without Batch Normalization. Moreover, Shakeout combined with Batch Normalization consistently outperforms standard BP with Batch Normalization on the test set by quite a large margin, as illustrated in Tab. \ref{tab:w16-4-cifar10}. These results imply the important role of Shakeout in reducing the over-fitting of a deep neural network. \section{Conclusion} We have proposed Shakeout, a new regularized training approach for deep neural networks. The regularizer induced by Shakeout is proved to adaptively combine $L_{0}$, $L_{1}$ and $L_{2}$ regularization terms. Empirically we find that 1) Compared to Dropout, Shakeout can afford much larger models; in other words, when the data is scarce, Shakeout outperforms Dropout by a large margin. 2) Shakeout can obtain much sparser weights than Dropout with superior or comparable generalization performance of the model; for Dropout, obtaining the same level of sparsity as Shakeout may cost the model a significant loss of accuracy.
3) Some deep architectures, such as GANs, are inherently prone to unstable training processes; Shakeout can reduce this instability effectively. In the future, we plan to focus on the inductive bias of Shakeout and to apply Shakeout to the compression task. \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi This research is supported by Australian Research Council Projects (No. FT-130101457, DP-140102164 and LP-150100671). \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
{ "timestamp": "2019-04-16T02:12:41", "yymm": "1904", "arxiv_id": "1904.06593", "language": "en", "url": "https://arxiv.org/abs/1904.06593" }
\section{Prologue} In recent years high-energy nuclear collisions at RHIC and the LHC have revealed strong indications for collective flow with hydrodynamic characteristics even in so-called ``small''\footnote{More about those quotation marks later.} collision systems (p-p, p-Au, d-Au, $^3$He-Au, and p-Pb; see e.g. the reviews \cite{Dusling:2015gta, Li:2017qvf, Floris:2019klr}). The question in the subtitle above is one that I get frequently asked in this context. Let me start by explaining that it is the wrong question to ask. To illustrate my point allow me to consider a world without quarks where the strong interaction is described by an SU(3) gauge theory (``QCD'') which contains only gluons in its color-deconfined ``gluon plasma'' state and only glueballs (G) in its color-confined hadronic phase. In such a world a GG collision at LHC energies would create a gluon plasma with similar initial energy ($e$), entropy ($s$) and (if it allows for a quasiparticle description) total particle density ($n$) to the quark-gluon plasma created in a pp collision in our world at the real LHC. The equation of state (EoS) $p(e)$, speed of sound $c_s(e)$, and transport coefficients (such as the specific shear and bulk viscosities $\eta/s$, $\zeta/s$) of this gluon plasma will be very similar to those of the quark-gluon plasma in our world, where these quantities are all dominated by the interactions with and among gluons. So the dynamical evolution of the gluon plasma created in GG collisions in this imaginary world will look qualitatively similar to that of the quark-gluon plasma created in pp collisions at the real LHC. If the total entropy of the collision fireball is enough to create, say, a dozen charged hadrons per unit pseudorapidity, corresponding to $dN/d\eta\simeq20$ if you include neutrals, in our real world where most of these hadrons are pions with a rest mass of 140\,MeV, it would not even suffice to create two glueballs G per unit rapidity, $dN_G/d\eta\simeq2$, in the imaginary world if the lightest glueballs had a mass of 1.5\,GeV or more. Does this imply that the gluon plasma created in a GG collision with $dN_G/d\eta{\,=\,}2$ in the imaginary world evolves less hydrodynamically than the quark-gluon plasma in a pp collision with $dN_\mathrm{ch}/d\eta{\,=\,}12$ in the real world? Obviously not. That the former collision has far fewer particles in the final state than the latter is a cruel joke of Nature who forces the partition of the system's energy into a small number of very heavy final state hadrons in the glueball world while creating, under the same initial conditions, an order of magnitude more final-state hadrons in our real world. If pions were lighter (say 10\,MeV instead of 140\,MeV), that same pp collision would create about 300 hadrons per unit rapidity in its final state, the same order of magnitude as measured in off-center PbPb collisions at the LHC where few physicists doubt the validity of the hydrodynamic flow paradigm \cite{Heinz:2009xj, Heinz:2013th, Gale:2013da, Romatschke:2017ejr}. The quantization of emitted energy in heavy chunks implies that the underlying fluid dynamical behavior cannot be sampled continuously and suffers from finite number statistical fluctuations --- even more so in the glueball world than in ours --- such that its exploration requires averaging over many similar collision events (same collision system, centrality and collision energy) in order to sample the underlying physics with sufficient statistical precision.
So, while the gluon plasma created in the GG collision of our imaginary glueball world may exhibit almost identical hydrodynamic flow patterns to the quark-gluon plasma in a pp collision at the LHC with the same initial entropy per unit rapidity, these patterns would be much harder to discern in the GG collision, due to much larger finite number statistical fluctuations in the final state. This doesn't mean, however, that no such patterns exist -- it is just difficult to distill them from the strongly fluctuating observables. I hope that this {\it Gedankenexperiment} convinces you that the absolute value of the number of final state hadrons per unit rapidity is a poor criterion for (pre-)judging the applicability of the hydrodynamic flow paradigm. Final state hadrons are only created at the end of the collision when the quark-gluon plasma hadronizes. Afterwards the hydrodynamic model quickly breaks down, due to the short-range nature of the residual ``strong'' interactions between color-neutral hadrons. The final-state hadrons are not responsible for the interactions that control the system's evolution towards local thermal equilibrium in its color-deconfined liquid stage -- its EoS, speed of sound, and its transport properties. In its liquid state, the strong open-color interactions in the quark-gluon plasma may even largely invalidate its description in terms of well-defined quasiparticles, again making the number of particle degrees of freedom per unit rapidity a poor criterion for (pre-)judging its ability to develop hydrodynamic flow. The absolute value of the (initial) entropy per unit space-time rapidity, $dS/d\eta_s= \tau_0 \int d^2r_\perp s(\bm{r}_\perp,\eta_s,\tau_0)$, on the other hand, which is (on average) monotonically related to the final state charged hadron pseudorapidity density $dN_\mathrm{ch}/d\eta$ \cite{Shen:2015qta}, remains well-defined even in strongly-coupled quantum field theories without good quasi-particles, and thus may be a better starting point for a breakdown criterion of the hydrodynamic paradigm (see, e.g., \cite{Basar:2013hea, Romatschke:2017ejr, Kurkela:2018wud}). \section{The ``unreasonable effectiveness'' of hydrodynamics for nuclear collisions} In the last decade, relativistic dissipative (``viscous'') fluid dynamics has become the workhorse of dynamical modeling of ultra-relativistic heavy-ion collisions \cite{Heinz:2009xj, Heinz:2013th, Gale:2013da, Romatschke:2017ejr}. In spite of the extraordinarily rapid expansion of the collision fireball, with dramatically different expansion rates along the beam direction (due to the inability of the two colliding nuclei to stop each other \cite{Bjorken:1982qr}) and in the transverse directions, where the expansion is driven by pressure gradients and starts from zero, which generates large shear stresses, the hydrodynamic model has proven to possess high predictive power. (An early example is shown in Fig.~\ref{F1}.) \begin{figure}[h] \includegraphics[width=\linewidth]{./figs/ALICE} \caption{\small Differential elliptic flow for pions, kaons and protons in semi-central (left) and semi-peripheral (right) Pb-Pb collisions at the LHC, as reported by the ALICE Collaboration at the Quark Matter 2011 conference \cite{Krzewicki:2011ee}. Solid lines show hydrodynamic \textbf{\textit{pre}}dictions from \cite{Shen:2011eg}.
\label{F1} } \end{figure} It works even in ``small'' collision systems, such as p-Pb and p-p at the LHC \cite{Weller:2017tsr} (see Fig.~\ref{F2}) or p-Au, d-Au, and $^3$He-Au at RHIC \cite{PHENIX:2018lia}, as long as subnucleonic fluctuations in the initial energy deposition are appropriately accounted for \cite{Welsh:2016siu}. \begin{figure}[h] \includegraphics[width=\linewidth]{./figs/Weller} \caption{\small Differential elliptic ($v_2$), triangular ($v_3$), and quadrangular flow ($v_4$) for charged hadrons from p-p (left), p-Pb (middle) and PbPb (right) collisions at the LHC, compared with hydrodynamic model simulations using the superSONIC code package \cite{Romatschke:2015gxa}. Figure taken from \cite{Weller:2017tsr}. \label{F2} } \end{figure} The largest uncertainties in comparing data from such small systems with fluid dynamical code packages (e.g. iEBE-VISHNU \cite{Shen:2014vra} (\textcolor{blue}{\url{https://u.osu.edu/vishnu}}), superSONIC \cite{Romatschke:2015gxa} (\textcolor{blue}{\url{https://sites.google.com/site/revihy/download}}), or MUSIC \cite{Schenke:2010rr} (\textcolor{blue}{\url{http://www.physics.mcgill.ca/music/}})) do not appear to arise from the applicability of the hydrodynamic model, but from our lack of precise knowledge of the internal structure of the nucleon, i.e. the distribution and event-by-event fluctuations of the gluon density inside protons and neutrons \cite{Welsh:2016siu}. Let me explain now why I use quotation marks when writing about ``small'' collision systems. As argued above, a good starting point for (pre-)judging the applicability of hydrodynamics is the total entropy per unit space-time rapidity $dS/d\eta_s$ deposited in the collision zone. Although, for a given collision configuration, $dS/d\eta_s$ is monotonically related to the final charged hadron pseudorapidity density $dN_\mathrm{ch}/d\eta$, the proportionality constant depends on the additional entropy produced by viscous heating during the expansion, and the latter increases with the fireball expansion rate. Fig.~\ref{F3} compares isotherms along the short and long directions of elliptically deformed fireballs created in peripheral Pb-Pb, central p-Pb, and high-multiplicity p-p collisions at $\sqrt{s_{_\mathrm{NN}}}=5.02$\,TeV with the same final charged hadron pseudorapidity density $dN_\mathrm{ch}/d\eta=100$. (For comparison the right column shows the corresponding isotherms for less extreme p-p collisions with a five times smaller final multiplicity.) \begin{figure}[h] \includegraphics[width=0.72\linewidth]{./figs/isotherms} \includegraphics[width=0.265\linewidth]{./figs/isotherms_pp.pdf} \caption{\small Isotherms of temperatures $T=200$ (blue), 155 (orange) and 100\,MeV (green) along the short ($x$, top row) and long ($y$, bottom row) directions of an ensemble-averaged fireball constructed from elliptically deformed and aligned fluctuating initial entropy density profiles from the T$_\mathrm{R}$ENTo model with exponent $p=0$ \cite{Moreland:2014oya} (similar to IP-GLASMA initial conditions) for Pb-Pb (left), p-Pb (center-left), and p-p collisions (center-right and right columns) at $\sqrt{s_{_\mathrm{NN}}}=5.02$\,TeV. The three left columns compare events from the three different collision systems with the same charged hadron pseudorapidity density $dN_\mathrm{ch}/d\eta=100$ in the final state.
The right column shows for comparison the corresponding isotherms for p-p collisions at the same collision energy but with five times smaller final pseudorapidity density $dN_\mathrm{ch}/d\eta=20$. All events were evolved with iEBE-VISHNU \cite{Shen:2014vra} using transport coefficients and other model parameters determined by Bayesian model calibration \cite{Bernhard:2016tnd}. \label{F3}} \end{figure} This comparison illustrates a number of important points: First, the fireballs created in the so-called ``small'' collision systems p-Pb and p-p are only initially small, due to the small cross section of the proton. As long as roughly the same total entropy $dS/d\eta_s$ is deposited initially, they all have roughly the same (much larger) size at hadronization and at freeze-out. This is easy to understand: since constant temperature implies constant density, identical multiplicities must correspond to identical volumes. Therefore, events with the same final multiplicity $dN_\mathrm{ch}/d\eta$ have the same freeze-out volume, irrespective of how dilute or compact the fireball's initial configuration was. Second, however, if the entropy $dS/d\eta_s$ is initially deposited within a smaller transverse area, the larger transverse pressure gradients in this more compact initial configuration drive stronger radial transverse flow \cite{Hirono:2014dda,Kalaydzhyan:2015xba}, reflected in the larger rate of growth of the outer radius of the isotherms in the center-left and center-right columns of Fig.~\ref{F3} compared to the left column. The resulting higher expansion rate reduces the ``Hubble volume'' of the expanding fireball whose dimensions (``HBT radii'') can be measured with two-particle intensity interferometry \cite{Heinz:1996rw}. At the same final multiplicity and volume, therefore, p-p collisions feature smaller HBT radii than p-Pb collisions, and p-Pb collisions have smaller HBT radii than Pb-Pb collisions. This effect has been observed and noted by the ALICE Collaboration (see Fig.~9 in \cite{Adam:2015pya}). A popular criterion for the validity of fluid dynamics is the Knudsen number, defined as the ratio between the microscopic interaction length (``mean free path'') and the macroscopic hydrodynamic length scale. In expanding systems, this macroscopic length scale is not given by the total radius of the fireball (related to the total freeze-out volume) but by its Hubble radius (which can be expressed through the expansion rate or through appropriately normalized space-time gradients of the energy or entropy density). This suggests that any phenomenological criterion for the applicability of hydrodynamics should involve, in addition to the observed charged hadron multiplicity $dN_\mathrm{ch}/d\eta$ (as a proxy for the initial entropy density $dS/d\eta_s$), the HBT radii or, better, the cube root of the HBT volume $\left |\det(R_{ij}^2)\right |^{1/6}$ (as a proxy for the Hubble radius). As an aside, I note that the higher expansion rate causes stronger viscous heating in the ``small'' collision systems. So, if the three systems shown in Fig.~\ref{F3} had been initialized with the same initial entropy per unit rapidity $dS/d\eta_s$, the final entropy (and thus $dN_\mathrm{ch}/d\eta$) and final total volumes would be somewhat larger for p-Pb than for Pb-Pb, and still larger for p-p collisions. This would have further exacerbated the above-mentioned effects on the radial flow and HBT radii.
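To make the proposed criterion concrete, the following minimal Python sketch combines a hypothetical HBT radius matrix with an assumed mean free path to form a Knudsen number based on the Hubble-radius proxy discussed above; all input numbers are illustrative assumptions, not measured values. \begin{verbatim}
import numpy as np

# Hypothetical HBT radius matrix R_ij (in fm) for a high-multiplicity
# p-p event; the numbers are purely illustrative.
R2 = np.diag([1.2, 1.2, 1.5])**2                  # R_ij^2 in fm^2 (assumed diagonal)
R_hubble = np.abs(np.linalg.det(R2))**(1.0/6.0)   # Hubble-radius proxy (fm)

dNch_deta = 100.0       # charged-hadron multiplicity density (entropy proxy)
lam_mfp   = 0.5         # assumed microscopic mean free path (fm)

# Knudsen number with the Hubble radius as the macroscopic length scale
Kn = lam_mfp / R_hubble
print(f"dNch/deta = {dNch_deta:.0f}, Hubble proxy = {R_hubble:.2f} fm, Kn = {Kn:.2f}")
\end{verbatim} Hydrodynamics would be expected to apply when this ratio stays well below unity.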
That initially denser collision systems develop stronger radial flow, as predicted by hydrodynamics, can be directly seen in Fig.~\ref{F4}. \begin{figure}[h] \includegraphics[width=0.33\linewidth]{./figs/EPOS_pions} \includegraphics[width=0.33\linewidth]{./figs/EPOS_kaons} \includegraphics[width=0.33\linewidth]{./figs/EPOS_protons} \caption{\small Transverse momentum spectra of pions, kaons and protons from p-Pb collisions at 5.02\,TeV with different numbers of charged tracks in $|\eta|<2.4$ (8 (dots), 84 (squares), 160 (triangles), 235 (inverted triangles)) as measured by CMS \cite{Chatrchyan:2013eya}. The lines show numerical simulations with the EPOS3.076 code which features a viscous fluid dynamic core \cite{Werner:2013tya}. Similar behavior was found by CMS in p-p collisions at 0.9, 2.76 and 7\,TeV \cite{Chatrchyan:2012qb}. Figure taken from \cite{Werner:2013tya}. \label{F4} } \end{figure} It shows pion, kaon and proton spectra from 5.02\,TeV p-Pb collisions measured by CMS for four multiplicity bins. As the multiplicity increases, the spectra become flatter (``harder''), and the effect increases with particle rest mass. This is a hallmark of radial flow \cite{Lee:1990sk,Schnedermann:1993ws} which (i) increases with the initial entropy density and (ii) pushes hadrons out to larger $p_T$ values by an amount that increases with their rest mass. For the same final multiplicity and collision energy, the effect is stronger in p-p collisions \cite{Chatrchyan:2012qb} than in p-Pb collisions \cite{Chatrchyan:2013eya}. The measured behavior can be quantitatively described by EPOS3.076 which has a viscous hydrodynamic core. That same model also describes the elliptic flow (``double ridge'') discovered by CMS in high-multiplicity p-p collisions at 7\,TeV \cite{Khachatryan:2010gv} and later confirmed by both ATLAS \cite{Aad:2015gqa} and CMS \cite{Khachatryan:2015lva} in p-p collisions at 13\,TeV, whose collective nature was demonstrated by CMS by measuring it with 4- and 6-particle cumulants \cite{Khachatryan:2016txc}. \section{Far-from-equilibrium hydrodynamics} The discussion above has established that (i) viscous fluid dynamics is phenomenologically very successful, yielding quantitatively precise descriptions of and predictions for soft-hadron spectra and flow correlations in Au-Au collisions at RHIC and Pb-Pb collisions at the LHC and providing at least a semiquantitative description of the same observables in p-Au, d-Au, $^3$He-Au, p-Pb and even high-multiplicity p-p collisions, while at the same time (ii) being characterized by large dissipative effects caused by the extremely rapid and anisotropic expansion of the heavy-ion collision fireballs. In fact, the approach is now being successfully used for extracting, with quantified uncertainties, key parameters characterizing the thermodynamic and transport properties of the quark-gluon plasma from a global model-to-data comparison with advanced Bayesian statistical analysis tools \cite{Bernhard:2016tnd,Bernhard:2018hnz}. Why do these large dissipative corrections not destroy the precision and predictive power of the hydrodynamic approach? In the last part of my presentation I will cover some work that my collaborators and I have performed over the last few years to address the particular challenges faced by hydrodynamic approaches when applied to relativistic heavy-ion collisions. These studies uncovered several surprises which showed that the hydrodynamic approach is much more robust and resilient than originally expected.
We now understand that its applicability requires neither local thermalization ({\it i.e.}\ thermalized exponential momentum distributions in the local rest frame) nor even local momentum isotropy. This is a dramatic change in our understanding compared to 20 years ago, when it was believed (certainly by me!) that the good agreement between ideal fluid dynamics and RHIC data \cite{Ackermann:2000tr} implied very short thermalization times of order $<1$\,fm/$c$ \cite{Heinz:2001xi}. We now understand that this time characterizes the time scale of ``hydrodynamization'' at which the system enters the region of validity of a second-order viscous hydrodynamic approach, rather than real local thermalization at which the fluid would obey the laws of ideal fluid dynamics. In other words, \textbf{\textit{dissipative hydrodynamics works even far from local thermal equilibrium, with quantitative precision}}. Ultra-relativistic heavy-ion collision dynamics pose two specific challenges to the applicability of dissipative fluid dynamics \cite{Strickland:2014pga, Heinz:2015gka, McNelis:2018jho}: (i) a large shear-viscous stress, in the form of a large difference $P_\perp{-}P_L$ between the transverse and longitudinal pressures, caused by large initial anisotropies between the longitudinal and transverse expansion rates, and (ii) a possibly large bulk viscous pressure $\Pi$ caused by critical dynamics near the quark-hadron phase transition. Optimized hydrodynamic approaches, such as anisotropic hydrodynamics \cite{Martinez:2010sc, Florkowski:2010cf, Bazow:2013ifa, Strickland:2014pga, Tinti:2015xwa, Molnar:2016gwq, McNelis:2018jho}, can handle these challenges more efficiently than standard dissipative fluid dynamics. Hydrodynamics is an effective theory whose form is independent of the strength of the microscopic interactions. Hydrodynamic equations can thus be derived from kinetic theory in a window of weak coupling and small pressure gradients where both approaches are simultaneously valid. Only the values of the transport coefficients and the equation of state depend on the microscopic coupling strength; for the strongly coupled quark-gluon plasma created in heavy-ion collisions, they must be obtained with non-perturbative methods. In kinetic theory, the conserved macroscopic currents $j^\mu(x)=\langle p^\mu\rangle(x)$ (particle current) and $T^{\mu\nu}(x)=\langle p^\mu p^\nu\rangle(x)$ (energy-momentum tensor) are obtained by taking momentum moments $\langle O(p)\rangle(x)\equiv\frac{g}{(2\pi)^3}\int\frac{d^3p}{E_p}\, O(p) f(x,p)$ of the distribution function $f(x,p)$. Hydrodynamic equations are obtained by splitting the distribution function into a leading-order contribution $f_0$, parametrized through macroscopic observables as \begin{equation} \label{eq1} f_0(x,p)=f_0\left(\frac{\sqrt{p_\mu\Omega^{\mu\nu}(x)p_\nu}-\tilde\mu(x)}{\tilde{T}(x)}\right), \end{equation} and a smaller first-order correction $\delta f$ ($|\delta f/f_0|\ll1$): \begin{equation} \label{eq2} f(x,p)=f_0(x,p)+\delta f(x,p). \end{equation} In Eq.~(\ref{eq1}), $p_\mu\Omega^{\mu\nu}(x)p_\nu=m^2+\bigl(1+\xi_\perp(x)\bigr) p_{\perp,\mathrm{LRF}}^2 + \bigl(1+\xi_L(x)\bigr) p_{z,\mathrm{LRF}}^2$, where the hydrodynamic flow field $u^\mu(x)$ defines the local fluid rest frame (LRF). $\tilde{T}(x)$ and $\tilde{\mu}(x)$ are the effective temperature and chemical potential in the LRF, Landau matched to the energy and particle densities, $e$ and $n$ \cite{Heinz:2015gka}.
$\xi_{\perp,L}$ parametrize the momentum anisotropy in the LRF and are Landau matched to the transverse and longitudinal pressures, $P_T$ and $P_L$ \cite{Tinti:2015xwa, Molnar:2016gwq, McNelis:2018jho}. The latter encode the bulk viscous pressure $\Pi=(2P_\perp{+}P_L)/3-P_\mathrm{eq}$ and the largest shear stress component $P_\perp{-}P_L$. In anisotropic hydrodynamics, $P_\perp$ and $P_L$ evolve macroscopically according to equations that reflect the competition between macroscopic anisotropic expansion (driving the system away from local equilibrium and momentum isotropy) and microscopic scattering (trying to restore them) \cite{McNelis:2018jho}. Using the decomposition (\ref{eq2}) we write $T^{\mu\nu}= T^{\mu\nu}_0+\delta T^{\mu\nu}\equiv T^{\mu\nu}_0+\Pi^{\mu\nu}$, $j^\mu= j^\mu_0+\delta j^\mu\equiv j^\mu_0+V^\mu$. Different hydrodynamic approaches can be characterized by the assumptions they make about the dissipative corrections and/or the approximations they use to derive their dynamics from the underlying Boltzmann equation:\\[0.5ex] {\bf 1.\ Ideal hydrodynamics} assumes local momentum isotropy, setting $f_0$ to be isotropic ($\xi_{\perp,L}=0$) and all dissipative currents to zero: $\Pi^{\mu\nu}=V^\mu=0$.\\[0.5ex] {\bf 2.\ Navier-Stokes (NS) theory} maintains local momentum isotropy at leading order and postulates instantaneous constitutive relations for $\Pi^{\mu\nu}$ and $V^\mu$ by introducing viscosity and heat conduction as transport coefficients that relate these flows to their driving forces. It ignores the microscopic relaxation time that is needed for these flows to adjust to their Navier-Stokes values, leading to acausal signal propagation.\\[0.5ex] {\bf 3.\ Israel-Stewart (IS) theory} \cite{Israel:1979wp} improves on NS theory by evolving $\Pi^{\mu\nu}$ and $V^\mu$ dynamically, with evolution equations derived from moments of the Boltzmann equation, keeping only terms linear in the Knudsen number $\mathrm{Kn}=\lambda_\mathrm{mfp}/\lambda_\mathrm{macro}$.\\[0.5ex] {\bf 4.\ Denicol-Niemi-Molnar-Rischke (DNMR) theory} \cite{Denicol:2012cn} improves on IS theory by keeping nonlinear terms up to order $\mathrm{Kn}^2$ and $\mathrm{Kn}\cdot\mathrm{Re}^{-1}$ when evolving $\Pi^{\mu\nu}$ and $V^\mu$.\\[0.5ex] {\bf 5.\ Third-order Chapman-Enskog expansion} \cite{Jaiswal:2013vta} keeps terms of up to third order when evolving $\Pi^{\mu\nu}$ and $V^\mu$.\\[0.5ex] {\bf 6.\ Anisotropic hydrodynamics ({\sc aHydro})} \cite{Martinez:2010sc, Florkowski:2010cf} allows for a leading-order local momentum anisotropy ($\xi_{\perp,L}\ne0$), evolved according to equations obtained from low-order moments of the Boltzmann equation, but ignores residual dissipative flows: $\Pi^{\mu\nu}=V^\mu=0$.\\[0.5ex] {\bf 7.\ Viscous anisotropic hydrodynamics ({\sc vaHydro})} \cite{Bazow:2013ifa, Bazow:2015cha} improves on {\sc aHydro} by additionally evolving (using IS or DNMR theory) the residual dissipative flows $\Pi^{\mu\nu},\,V^\mu$ generated by the deviation $\delta f$ around the locally anisotropic leading-order distribution function $f_0$. There exist a few highly symmetric situations for which the Boltzmann equation, in the Relaxation Time Approximation (RTA), can be solved exactly. These include the Bjorken \cite{Bjorken:1982qr} and Gubser \cite{Gubser:2010ze} flows which are (although highly idealized) relevant for heavy-ion collisions \cite{Denicol:2014xca, Denicol:2014tha}.
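As a concrete illustration of the moment-taking behind Eqs.~(\ref{eq1}) and (\ref{eq2}), the following minimal Python sketch numerically evaluates the energy density and pressures of a massless Boltzmann distribution with a single longitudinal anisotropy parameter; the parameter values and grid ranges are illustrative assumptions, not results from any of the cited works. \begin{verbatim}
import numpy as np

# Minimal sketch: momentum moments of a leading-order anisotropic
# distribution of the form in Eq. (1), here a massless Boltzmann form
# with xi_perp = 0 and one longitudinal anisotropy parameter xi_L.
g, T, xi_L = 2.0, 0.3, 1.0         # degeneracy, temperature (GeV), anisotropy

pT = np.linspace(1e-4, 5.0, 400)   # transverse momentum grid (GeV)
pz = np.linspace(-5.0, 5.0, 801)   # longitudinal momentum grid (GeV)
PT, PZ = np.meshgrid(pT, pz, indexing="ij")
E  = np.sqrt(PT**2 + PZ**2)        # massless: E_p = |p|
f0 = np.exp(-np.sqrt(PT**2 + (1.0 + xi_L)*PZ**2)/T)

def moment(weight):   # <O(p)> = g/(2pi)^3 int d^3p/E_p O f, d^3p = 2*pi*pT dpT dpz
    integ = 2*np.pi*PT*weight*f0/E
    return g/(2*np.pi)**3 * np.trapz(np.trapz(integ, pz, axis=1), pT)

e   = moment(E*E)                  # T^{00}: energy density
P_L = moment(PZ*PZ)                # T^{zz}: longitudinal pressure
P_T = moment(PT*PT/2.0)            # (T^{xx}+T^{yy})/2: transverse pressure
print(f"P_L/P_T = {P_L/P_T:.3f}")  # xi_L > 0 squeezes the longitudinal pressure
\end{verbatim} Landau matching, as described above, then amounts to choosing $\tilde{T}$, $\tilde\mu$, and $\xi_{\perp,L}$ such that these moments reproduce the macroscopic $e$, $n$, $P_T$, and $P_L$.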
While for Bjorken expansion the expansion rate decreases with longitudinal proper time $\tau$ like $1/\tau$, allowing the system to asymptotically reach local momentum isotropy and thermal equilibrium, Gubser expansion includes an additional strong transverse flow which leads to an asymptotically constant expansion rate and exponentially growing Knudsen number Kn \cite{Denicol:2014tha}, and thus to asymptotic free-streaming. The exact evolution of the macroscopic currents $T^{\mu\nu}$ and $j^\mu$ associated with these solutions can be compared with that predicted by any of the 7 different hydrodynamic approximations listed above and thus be used to assess the accuracy of the latter in these two opposite extremes of asymptotic evolution. To illustrate the differences between the different hydrodynamic approximations, we briefly summarize the corresponding evolution equations for the shear stress. (For both Bjorken and Gubser flow with a conformal equation of state $\Pi^{\mu\nu}$ has only one independent component, the shear stress $\pi^{\eta\eta}$.) For Gubser flow (where all macroscopic quantities depend on only one space-time variable, the de Sitter time $\rho$ \cite{Gubser:2010ze}) one finds the following \cite{Marrochio:2013wla, Denicol:2014xca, Denicol:2014tha, Nopoush:2014qba, Martinez:2017ibh, Chattopadhyay:2018apf} (a similar discussion for Bjorken flow can be found in \cite{Florkowski:2013lza, Florkowski:2014sfa, Bazow:2013ifa}):\footnote{All quantities with hats have been made unitless by multiplying with appropriate powers of the proper time $\tau$.}\\[0.5ex] {\bf 1.\ Ideal hydrodynamics} gives $\hat{T}_{\mathrm{ideal}}(\rho) = \frac{\hat{T}_0}{\cosh^{2/3}(\rho)}$, combined with zero shear stress, $\hat{\pi}^{\eta\eta}{\,=\,}0$.\\[0.5ex] \begin{figure}[b!] \vspace*{-3mm} \centering \includegraphics[width=0.91\linewidth]{./figs/BjGubserScaling} \vspace*{-3mm} \caption{\small Normalized shear stress $\pi/(e{+}P_\mathrm{eq})=\frac{1}{4}(\pi/P_\mathrm{eq})$ (left column) and pressure anisotropy $P_L/P_\perp$ (right column) for Bjorken (upper row) and Gubser flow (lower row), plotted as functions of a rescaled time variable $\tilde w$ which for Bjorken flow corresponds to the inverse Knudsen number, $\tilde w{\,=\,}\rm{Kn}^{-1}$, and for Gubser flow directly to the Knudsen number, $\tilde w$\,=\,Kn \cite{Chattopadhyay:2018apf}. \label{F5} } \end{figure} {\bf 2.-7.} For all dissipative hydrodynamic frameworks the temperature evolves instead according to the differential equation $\frac{1}{\hat{T}}\frac{d\hat{T}}{d\rho}+\frac{2}{3}\tanh \rho = \frac{1}{3}\bar{\pi}_{\eta}^{\eta}(\rho )\tanh\rho$ \cite{Gubser:2010ze,Marrochio:2013wla} where $\bar{\pi}\equiv \hat{\pi}_\eta^\eta/(\hat{T}\hat{s})$. Differences between the approaches arise from their different evolution of the shear stress. In\\ {\bf 2.\,NS theory} the shear stress is given by the (instantaneous) constitutive relation $\hat{\pi}_{NS}^{\eta\eta}=\frac{4}{15}\hat{\tau}_\mathrm{rel}\tanh \rho$ \cite{Denicol:2014tha} where $\hat{\tau}_\mathrm{rel}=\mathrm{const.}/\hat{T}$. In all second and higher-order hydrodynamic approaches the shear stress evolves instead according to a differential equation of the type \begin{equation} \label{eq3} d\bar{\pi}_{\eta}^{\eta}/d\rho + \bar{\pi}_\eta^\eta/\hat{\tau}_\mathrm{rel} = \bigl(a_1 + a_2 \bar{\pi} - a_3 \bar{\pi}^{2}\bigr)\tanh \rho - \frac{4}{3} {\cal F}(\bar{\pi}).
\end{equation} For the approaches {\bf 3., 4., 5.} ({\it i.e.}, as long as the equations are derived by expanding around a locally isotropic distribution function, $\xi_{\perp,L}{\,=\,}0$) the function ${\cal F}$ vanishes: ${\cal F}(\bar{\pi})=0$. Only for {\bf 7.}, {\sc vaHydro}, is ${\cal F}(\bar{\pi})$ nonzero; its specific form can be found in \cite{Martinez:2017ibh}.\footnote{The {\sc aHydro} study in \cite{Nopoush:2014qba} expresses the evolution of the shear stress in terms of the microscopic parameters $\xi_{\perp,L}$ and thus cannot be directly compared to the macroscopic evolution equation (\ref{eq3}).} For the constants $(a_1,a_2,a_3)$ one finds:\\[0.5ex] {\bf 3.\ IS theory:} $(a_1,a_2,a_3) = \left(\frac{4}{15},\, 0,\,\frac{4}{3}\right)$.\\ {\bf 4.\ DNMR theory:} $(a_1,a_2,a_3) = \left(\frac{4}{15},\, \frac{10}{21},\,\frac{4}{3}\right)$.\\ {\bf 5.\ Third-order Chapman-Enskog expansion:} $(a_1,a_2,a_3) = \left(\frac{4}{15},\, \frac{10}{21},\,\frac{412}{147}\right)$ \cite{Chattopadhyay:2018apf}.\\ {\bf 6.\ Anisotropic hydrodynamics ({\sc aHydro}):} See footnote 4.\\ {\bf 7.\ Viscous anisotropic hydrodynamics {\sc vaHydro}:} $(a_1,a_2,a_3) = \left(\frac{5}{12},\, \frac{4}{3},\,\frac{4}{3}\right)$ \cite{Martinez:2017ibh}. Figure~\ref{F5} shows, for thermal equilibrium initial conditions, the time evolution of the normalized shear stress $\bar\pi$ and the pressure anisotropy $P_L/P_\perp$ for Bjorken and Gubser flows, for three systems with specific shear viscosities $4\pi\eta/s=4\pi T\tau_\mathrm{rel}/5=1,\,3$, and 10 \cite{Chattopadhyay:2018apf}. For clarity, the exact solution of the Boltzmann equation (green solid lines) is compared only with the two best-performing hydrodynamic approximations, the third-order Chapman-Enskog expansion (red dotted lines) and anisotropic hydrodynamics (blue dashed lines) [where in this case, due to the high degree of symmetry of the flow, {\sc aHydro} and {\sc vaHydro} correspond to the same approximation \cite{Martinez:2017ibh}]. Similar comparisons for the other hydrodynamic approximations discussed in this contribution can be found in the literature \cite{Marrochio:2013wla, Florkowski:2013lya, Florkowski:2013lza, Bazow:2013ifa, Denicol:2014xca, Denicol:2014tha, Nopoush:2014qba, Florkowski:2014sfa, Strickland:2014pga, Heller:2016rtz, Martinez:2017ibh, Romatschke:2017vte, Alqahtani:2017mhy, Strickland:2017kux, Chattopadhyay:2018apf}. Following \cite{Heller:2016rtz}, the time variables $\tau$ (for Bjorken flow) and $\rho$ (for Gubser flow) are replaced by a scaling variable $\tilde{w}$, defined as the product of the macroscopic expansion rate (${=\,}1/\tau$ in the case of Bjorken flow and ${=\,}2\tanh\rho$ for Gubser flow) with the microscopic relaxation time $\tau_\mathrm{rel}{\,=\,}4\pi\eta/(Ts)$ (for Gubser flow), or as its inverse (for Bjorken flow). The idea behind this rescaling is that for Bjorken flow the system approaches thermalization ({\it i.e.}\ a regime of small Knudsen numbers) at late times, while for Gubser flow it becomes asymptotically free-streaming ({\it i.e.}\ approaches a regime of large Knudsen numbers at late times). The rate of this approach scales with the microscopic relaxation time. Figure~\ref{F5} shows that for Bjorken flow (top row) the solutions of the Boltzmann equation and of the two hydrodynamic approximations shown in the plot approach a common attractor \cite{Heller:2015dha} at late times, where the system approaches local momentum isotropy and thermal equilibrium.
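The attractor behavior can be made explicit numerically. The following minimal Python sketch integrates Eq.~(\ref{eq3}) for Gubser flow with the DNMR coefficients and ${\cal F}=0$, coupled to the temperature equation quoted above; the relaxation time is taken as $\hat\tau_\mathrm{rel}=5(\eta/s)/\hat T$, consistent with the normalization $4\pi\eta/s=4\pi T\tau_\mathrm{rel}/5$, and the initial values are illustrative assumptions chosen only to show the collapse onto a common curve. \begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: Eq. (3) for Gubser flow with the DNMR coefficients
# (a1, a2, a3) = (4/15, 10/21, 4/3) and F(pi) = 0, coupled to
# (1/T) dT/drho + (2/3) tanh(rho) = (1/3) pi tanh(rho).
a1, a2, a3 = 4/15, 10/21, 4/3
eta_s = 3/(4*np.pi)                 # specific shear viscosity (illustrative)

def rhs(rho, y):
    T, pi = y
    tau_rel = 5.0*eta_s/T           # assumed normalization, see lead-in text
    dT  = T*np.tanh(rho)*(pi/3.0 - 2.0/3.0)
    dpi = -pi/tau_rel + (a1 + a2*pi - a3*pi**2)*np.tanh(rho)
    return [dT, dpi]

for pi0 in (-0.4, 0.0, 0.4):        # different initial shear stresses
    sol = solve_ivp(rhs, (-10.0, 10.0), [1.0, pi0], rtol=1e-8, atol=1e-10)
    print(f"pi(rho=-10) = {pi0:+.1f}  ->  pi(rho=10) = {sol.y[1, -1]:+.4f}")
# all initializations approach the same late-time value (the attractor)
\end{verbatim}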
Different initial conditions relax exponentially towards this attractor. However, the hydrodynamic models describe the exact Boltzmann dynamics well even at early times where, for $\eta/s{\,=\,}10/(4\pi)$, the system moves very far away from equilibrium, as witnessed by the shear pressure becoming almost as large as the thermal pressure (corresponding to a large inverse Reynolds number Re$^{-1}={\cal O}(1)$). In the bottom row of the figure one sees that the approach to a common late-time attractor persists in the case of Gubser flow, at least for anisotropic hydrodynamics, while the third-order Chapman-Enskog approach converges to an incorrect asymptotic value for the inverse Reynolds number ($\pi/P_\mathrm{eq}\to1.6$ instead of 2). Still, the third-order Chapman-Enskog approach performs much better than all other hydrodynamic approximation schemes that are based on expansions around local momentum isotropy. That anisotropic hydrodynamics correctly reproduces even the asymptotic free-streaming limit of Gubser flow is a striking counterexample to the folklore that hydrodynamics can only be applied to systems that are close to local momentum isotropy and thermal equilibrium. Still, the high quality of the description of Gubser flow by {\sc aHydro} (blue dashed lines) may be somewhat accidental, in that Gubser symmetry may produce phase-space distributions that are particularly well adjusted to being decomposed as in Eqs.\ (\ref{eq1}) and (\ref{eq2}). Upcoming (3+1)-dimensional studies \cite{McNelis:2018jho} will shed further light on this issue. \section*{References}
{ "timestamp": "2019-04-16T02:12:40", "yymm": "1904", "arxiv_id": "1904.06592", "language": "en", "url": "https://arxiv.org/abs/1904.06592" }
\section{Supplemental information: Solitons explore the quantum classical boundary} \ssection{Complete Two-mode model} The position-dependent terms omitted in the main article are: \begin{align} \hat{H} &= E_0( \hat{a}^\dagger \hat{a} + \hat{b}^\dagger \hat{b} )+ \frac{\chi}{2} ( \hat{a}^\dagger \hat{a}^\dagger \hat{a} \hat{a} + \hat{b}^\dagger \hat{b}^\dagger \hat{b} \hat{b}) \nonumber\\[0.15cm] &+J (\hat{b}^\dagger \hat{a} + \hat{a}^\dagger \hat{b}) + \bar{U}(4 \hat{a}^\dagger \hat{a} \hat{b}^\dagger \hat{b}+ \hat{a}^\dagger \hat{a}^\dagger \hat{b} \hat{b} +\hat{b}^\dagger \hat{b}^\dagger \hat{a} \hat{a} ) \nonumber\\[0.15cm] &+2\bar{J}(\hat{a}^\dagger \hat{a} + \hat{b}^\dagger \hat{b} -1 )(\hat{b}^\dagger \hat{a} + \hat{a}^\dagger \hat{b}), \label{twomodeHamil} \end{align} with coefficients \begin{align} E_0&=\int dx \: \bar{L}(x)\left[-\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2}\right]\bar{L}(x),\\ \chi&=U_0\int dx \: \bar{L}(x)^4=-\frac{m U_0^2 \sub{N}{sol}}{6 \hbar^3}, \\ J&=\int dx \: \bar{L}(x)\left[-\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2}\right]\bar{R}(x),\\ \bar{U}&=\frac{U_0}{2}\int dx \: \bar{L}(x)^2 \bar{R}(x)^2, \\ \bar{J}&=\frac{U_0}{2}\int dx \:\bar{L}(x)^3 \bar{R}(x). \label{twomodecoeffsl} \end{align} \ssection{Number-momentum entanglement} An equal number of atoms, $\sub{N}{sol}$, is contained in each of the two incoming solitons, with momenta $p_{0}$ and $-p_{0}$ per atom; thus the initial total net momentum is zero. At the moment of collision, tunnelling is likely due to the close proximity of the solitons. Let us assume $a$ atoms are transferred from the left to the right soliton. If we denote the outgoing momenta per atom by $p_+$ and $-p_-$, conservation of momentum requires: % \begin{align} (N_{sol}+a) p_{+} - (N_{sol}-a )p_{-} = 0. \label{momcons} \end{align} Together with energy conservation \begin{align} \sub{N}{sol} \frac{p_0^2}{m} + \chi \sub{N}{sol}^2 &=(\sub{N}{sol}+a)\frac{p_+^2}{2m} + \chi\frac{(\sub{N}{sol}+a)^2}{2}\nonumber\\[0.15cm] &+(\sub{N}{sol}-a)\frac{p_-^2}{2m} + \chi\frac{(\sub{N}{sol}-a)^2}{2}, \label{encons} \end{align} Eqs.~\bref{momcons} and \bref{encons} can be solved to yield the momenta of the atoms in the outgoing solitons, $p_\pm$. We find \begin{align} p_{+}& = \pm \frac{\sqrt {a-N_{sol}} \sqrt {a^{2} m \chi - p_{0}^{2} N_{sol} }}{\sqrt{a N_{sol}+ N_{sol}^{2}}}, \end{align} with matching velocities $v=p_+/m$ shown in Fig.~3. \end{document}
{ "timestamp": "2019-05-07T02:28:14", "yymm": "1904", "arxiv_id": "1904.06552", "language": "en", "url": "https://arxiv.org/abs/1904.06552" }
\section{Introduction} The surprising discovery of Turbo-codes \cite{berrou1993near} in the early 1990s was a major breakthrough in the field of digital communication. Two simple codes combined with an interleaver can be decoded in a nearly optimal way with loopy belief-propagation~(BP)~\cite{pearl,mceliceBP} so that they operate close to Shannon's channel capacity \cite{shannon1949mathematical}. This led to the rediscovery of LDPC codes \cite{gallager1962low} and to the investigation of more general constructions like Trellis-Constrained Codes (TCCs) \cite{frey1997trellis,franck2016some}. However, it turns out that near-optimal decoding with BP only works for some specific classes of TCCs, but not in general. In this paper we describe a method for the probabilistic computation of the most likely codeword in a TCC w.r.t. a vector of symbol likelihoods. We iteratively update the symbol likelihoods so that the relative likelihood of the most likely codeword continually increases until it hopefully stands out from all other codewords. The algorithm is inspired by amplitude amplification \cite{brassard1997exact,brassard2002quantum}, which is used in quantum algorithms like Grover search~\cite{grover1996fast}. Our algorithm converges in a more controlled way than BP. \begin{figure} \begin{centering} \subfloat[A TCC constructed from convolutional codes.]{\begin{centering} \includegraphics[width=1\columnwidth]{ImageIC} \par\end{centering} \label{Fig:TCC-conv}} \par\end{centering} \begin{centering} \subfloat[A TCC representation of a LDPC code.]{\begin{centering} \includegraphics[width=1\columnwidth]{ImageLDPC} \par\end{centering} \label{Fig:TCC-ldpc}} \par\end{centering} \caption{Examples of Trellis-Constrained Codes.} \label{Fig:TCCs} \end{figure} \section{Preliminaries} An intersection code $\mathbb{C}_{\cap}$ is defined as \[ \mathbb{C}_{\cap}:=\{\mathbf{c}:\mathbf{c}\in\mathbb{C}_{1}\cap\mathbb{C}_{2}\}, \] where $\mathbb{C}_{1},\mathbb{C}_{2}\subseteq\mathbb{S}=\{-1,+1\}^{n}$ are chosen such that the code $\mathbb{C}_{1}$ and the interleaved code $\mathbb{C}_{2}$ have a low trellis complexity. Some examples of TCCs are represented in Fig.~\ref{Fig:TCCs}. For a memoryless binary channel defined by $\gamma$, a received word $\mathbf{r}=(r_{1},...,r_{n})\in\mathbb{R}^{n}$ and a word $\mathbf{s}=(s_{1},...,s_{n})\in\mathbb{S}$, we define the log-likelihood ratio \[ L(r):=\frac{1}{2}\ln\frac{P(r|+1)}{P(r|-1)}\text{ with }P(r|s):=\gamma^{rs}, \] and we use Iverson brackets \cite{knuth1992two} \[ \langle\mathtt{false}\rangle:=0\text{ and }\langle\mathtt{true}\rangle:=1 \] to define the code-constrained likelihoods \[ P_{\cap}(\mathbf{r}|\mathbf{s}):=\gamma^{\mathbf{r}\mathbf{s}^{T}}\langle\mathbf{s}\in\mathbb{C}_{1}\rangle\langle\mathbf{s}\in\mathbb{C}_{2}\rangle=\gamma^{\mathbf{r}\mathbf{s}^{T}}\langle\mathbf{s}\in\mathbb{C}_{\cap}\rangle. \] More details on the channels can be found in the Appendix. \section{Likelihood Amplification\label{sec:Relative-Likelihood-Amplificatio}} The objective of an ML decoder is to determine \[ \check{\mathbf{c}}={\displaystyle \arg\max_{\mathbf{s}\in\mathbb{S}}}\,P_{\cap}(\mathbf{r}|\mathbf{s})={\displaystyle \arg\max_{\mathbf{s}\in\mathbb{C}_{\cap}}}\,\gamma^{\mathbf{r}\mathbf{s}^{T}}.
\] To reflect the structure of $\mathbb{C}_{\cap}$, with the two constituent codes and the symbol constraints, we can equivalently write \[ (\check{\mathbf{c}},\check{\mathbf{c}})={\displaystyle \arg\max_{(\mathbf{s},\mathbf{s'})\in\mathbb{C}_{1}\times\mathbb{C}_{2}}}\gamma^{\mathbf{w}_{1}\mathbf{s}^{T}+\mathbf{w}_{2}\mathbf{s'^{T}}}\cdot\prod_{j=1}^{n}\langle s_{j}=s'_{j}\rangle \] where $\mathbf{w}_{1}+\mathbf{w}_{2}=\mathbf{r}$ with $\mathbf{w}_{1},\mathbf{w}_{2}\in\mathbb{R}^{n}$. \subsection{Overview} During the decoding process we iteratively update $\mathbf{w}_{1}$ and $\mathbf{w}_{2}$, and in this description we denote the corresponding values in iteration $i$ as $\mathbf{w}_{1}^{(i)}$ and $\mathbf{w}_{2}^{(i)}$. Further, we consider \begin{itemize} \item the likelihood of the most likely codeword \[ p_{\check{\mathbf{c}}}^{(i)}:=\gamma^{\mathbf{w}_{1}^{(i)}\check{\mathbf{c}}^{T}+\mathbf{w}_{2}^{(i)}\check{\mathbf{c}}^{T}}\text{, and} \] \item the cumulative likelihood of all words in $\mathbb{C}_{1}\times\mathbb{C}_{2}$ \[ \Xi^{(i)}:=\sum_{(\mathbf{s},\mathbf{s'})\in\mathbb{C}_{1}\times\mathbb{C}_{2}}\gamma^{\mathbf{w}_{1}^{(i)}\mathbf{s}^{T}+\mathbf{w}_{2}^{(i)}\mathbf{s}'^{T}}. \] \end{itemize} Initially, we set \begin{align*} \mathbf{w}_{1}^{(0)} & \leftarrow\mathbf{r}/2\\ \mathbf{w}_{2}^{(0)} & \leftarrow\mathbf{r}/2 \end{align*} and we estimate \[ p_{\check{\mathbf{c}}}^{(0)}=\gamma^{\mathbf{w}_{1}^{(0)}\check{\mathbf{c}}^{T}+\mathbf{w}_{2}^{(0)}\check{\mathbf{c}}^{T}}=\gamma^{\mathbf{r}\check{\mathbf{c}}^{T}}. \] Note that for the BEC we have $p_{\check{\mathbf{c}}}^{(0)}=\sum_{i=1}^{n}\langle r_{i}\ne0\rangle$. Then, in iterations where $i$ is even we compute \begin{align} \label{eqn:step1} \begin{split} \mathbf{w}_{1}^{(i+1)} & \leftarrow \mathbf{w}_{1}^{(i)}+\Delta^{(i)}\\ \mathbf{w}_{2}^{(i+1)} & \leftarrow \mathbf{w}_{2}^{(i)}-\Delta^{(i)} \end{split} \end{align}and in iterations where $i$ is odd we compute\begin{align} \label{eqn:step2} \begin{split} \mathbf{w}_{1}^{(i+1)} & \leftarrow \rho^{(i)}\cdot\mathbf{w}_{1}^{(i)}\\ \mathbf{w}_{2}^{(i+1)} & \leftarrow \rho^{(i)}\cdot\mathbf{w}_{2}^{(i)}. \end{split} \end{align}The corresponding $\Delta^{(i)}\in\mathbb{R}^{n}$ and $\rho^{(i)}\in\mathbb{R}$ are chosen so that \begin{equation} \frac{p_{\check{\mathbf{c}}}^{(i+1)}}{\Xi^{(i+1)}}\geq\frac{p_{\check{\mathbf{c}}}^{(i)}}{\Xi^{(i)}},\label{eq:increasing} \end{equation} which means that the relative likelihood of the most likely codeword stays the same or increases in every step. Details and stopping criteria are explained in the following sections. \subsection{Choice of $\Delta^{(i)}$} In order to choose $\Delta^{(i)}$ so that (\ref{eq:increasing}) holds, let us investigate the relation between $p_{\check{\mathbf{c}}}^{(i+1)}$ and $p_{\check{\mathbf{c}}}^{(i)}$, and between $\Xi^{(i+1)}$ and $\Xi^{(i)}$ in (\ref{eqn:step1}). First, we have \begin{equation} p_{\check{\mathbf{c}}}^{(i+1)}=\gamma^{(\mathbf{w}_{1}^{(i)}+\Delta^{(i)}+\mathbf{w}_{2}^{(i)}-\Delta^{(i)})\check{\mathbf{c}}^{T}}=p_{\check{\mathbf{c}}}^{(i)}\label{eq:delta1} \end{equation} for any $\Delta^{(i)}\in\mathbb{R}^{n}$. The same holds for all codewords in $\mathbb{C}_{\cap}$, so that the most likely word in $\mathbb{C}_{\cap}$ under $\mathbf{r}$ also remains the most likely word under $\mathbf{w}_{1}$ and $\mathbf{w}_{2}$.
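The invariance (\ref{eq:delta1}) is easy to check numerically. The following minimal Python sketch verifies that shifting an arbitrary $\Delta$ between $\mathbf{w}_1$ and $\mathbf{w}_2$ leaves every codeword likelihood unchanged while $\Xi$ changes; the tiny codes $\mathbb{C}_1$, $\mathbb{C}_2$ and the received word are illustrative toy assumptions, not constructions from this paper. \begin{verbatim}
import numpy as np

# Toy check of the step-1 update: p_word is invariant under the Delta
# shift, while the cumulative likelihood Xi over C1 x C2 is not.
gamma = 2.0
C1 = [np.array(c) for c in [( 1, 1,-1,-1), ( 1,-1, 1,-1), (-1,-1,-1,-1)]]
C2 = [np.array(c) for c in [( 1, 1,-1,-1), (-1, 1, 1,-1), (-1,-1,-1,-1)]]
r  = np.array([0.9, 0.7, -0.8, -0.6])

w1, w2 = r/2.0, r/2.0
Delta  = np.array([0.0, 0.3, 0.0, 0.0])   # arbitrary single-symbol shift

def p_word(c, w1, w2):                    # likelihood of codeword c
    return gamma**(w1 @ c + w2 @ c)

def Xi(w1, w2):                           # cumulative likelihood over C1 x C2
    return sum(gamma**(w1 @ s + w2 @ sp) for s in C1 for sp in C2)

c = C1[0]                                 # a word contained in both codes
print(p_word(c, w1, w2), Xi(w1, w2))
print(p_word(c, w1 + Delta, w2 - Delta), Xi(w1 + Delta, w2 - Delta))
# p_word agrees in both lines; Xi generally differs
\end{verbatim}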
Then, to understand the relation between $\Xi^{(i)}$ and $\Xi^{(i+1)}$ let us assume \[ \Delta^{(i)}=(0,...,0,\delta_{j},0,...,0) \] with a single possibly non-zero value $\delta_{j}$ at position $j$ and \[ \Xi^{(i)}=\Xi_{-1}^{(i)}+\Xi_{0}^{(i)}+\Xi_{+1}^{(i)} \] where \begin{align*} & \Xi_{-1}^{(i)}:=\sum_{(\mathbf{s},\mathbf{s'})\in\mathbb{C}_{1}\times\mathbb{C}_{2}}\gamma^{\mathbf{w}_{1}^{(i)}\mathbf{s}^{T}+\mathbf{w}_{2}^{(i)}\mathbf{s}'^{T}}\cdot\langle s_{j}=-1\rangle\cdot\langle s'_{j}=+1\rangle,\\ & \Xi_{0{\color{white}+}}^{(i)}:=\sum_{(\mathbf{s},\mathbf{s'})\in\mathbb{C}_{1}\times\mathbb{C}_{2}}\gamma^{\mathbf{w}_{1}^{(i)}\mathbf{s}^{T}+\mathbf{w}_{2}^{(i)}\mathbf{s}'^{T}}\cdot\langle s_{j}=s'_{j}\rangle,\\ & \Xi_{+1}^{(i)}:=\sum_{(\mathbf{s},\mathbf{s'})\in\mathbb{C}_{1}\times\mathbb{C}_{2}}\gamma^{\mathbf{w}_{1}^{(i)}\mathbf{s}^{T}+\mathbf{w}_{2}^{(i)}\mathbf{s}'^{T}}\cdot\langle s_{j}=+1\rangle\cdot\langle s'_{j}=-1\rangle. \end{align*} It follows from (\ref{eqn:step1}) that \[ \Xi^{(i+1)}=\gamma^{-2\delta_{j}}\cdot\Xi_{-1}^{(i)}+\Xi_{0}^{(i)}+\gamma^{2\delta_{j}}\cdot\Xi_{+1}^{(i)}. \] Thus, for $\delta_{j}=0$ we have $\Xi^{(i+1)}=\Xi^{(i)}$, and for $\delta_{j}$ equal to \begin{equation} \delta_{j\min}=\arg\min_{\delta_{j}}(\Xi^{(i+1)})=(\log_{\gamma}\Xi_{-1}^{(i)}-\log_{\gamma}\Xi_{+1}^{(i)})/4\label{eq:deltamin} \end{equation} we obtain a minimal $\Xi^{(i+1)}$ for which \begin{equation} \Xi^{(i+1)}\leq\Xi^{(i)}.\label{eq:delta2} \end{equation} Hence, we can pick a position $j$ and compute $\Delta^{(i)}$ so that (\ref{eq:delta1}) and (\ref{eq:delta2}) hold. This implies that (\ref{eq:increasing}) must also hold. As in quantum computing, the decoder does not care whether the symbols $s_{j}=s'_{j}$ are both $-1$ or both $+1$; the relative likelihoods of both states are increased. The effect of such an optimization is illustrated in Figure~\ref{Fig:Optimization}. In practice it can be more efficient to compute $\delta_{1},...,\delta_{n}$ for all symbols at once, according to (\ref{eq:deltamin}), and to use $\Delta^{(i)}=(\kappa\cdot\delta_{1},...,\kappa\cdot\delta_{n})$, where $\kappa$ is a scaling factor used to prevent an overly large step that could result from correlations. \begin{figure} \begin{centering} \subfloat[iteration $i$]{\begin{centering} \begin{tikzpicture} \draw[->] (0,0) -- (4,0); \draw[->] (0,0) -- (0,2); \draw (1,0)[line width=3pt] -- (1,1.00); \draw (2,0)[line width=3pt] -- (2,1.75); \draw (3,0)[line width=3pt] -- (3,0.5); \draw (1,0) node[anchor=north] {$\Xi^{(i)}_{-1}$}; \draw (2,0) node[anchor=north] {$\Xi^{(i)}_{0}$}; \draw (3,0) node[anchor=north] {$\Xi^{(i)}_{+1}$}; \end{tikzpicture} \par\end{centering} }\subfloat[iteration $i+1$]{\begin{centering} \begin{tikzpicture} \draw[->] (0,0) -- (4,0); \draw[->] (0,0) -- (0,2); \draw (1,0)[line width=3pt] -- (1,0.60); \draw (2,0)[line width=3pt] -- (2,1.75); \draw (3,0)[line width=3pt] -- (3,0.60); \draw (1,0) node[anchor=north] {$\Xi^{(i+1)}_{-1}$}; \draw (2,0) node[anchor=north] {$\Xi^{(i+1)}_{0}$}; \draw (3,0) node[anchor=north] {$\Xi^{(i+1)}_{+1}$}; \end{tikzpicture} \par\end{centering} } \par\end{centering} \centering{}\caption{Exemplary relation between the values $\Xi_{-1},\Xi_{0},\Xi_{+1}$ in iteration $i$ and $i+1$, for $\Delta^{(i)}=(0,...,\delta_{j\min},...,0)$ with (\ref{eq:deltamin}).
It always holds that $\Xi_{-1}^{(i+1)}=\Xi_{+1}^{(i+1)}$ and $\Xi^{(i+1)}\protect\leq\Xi^{(i)}$.} \label{Fig:Optimization} \end{figure} \subsection{Choice of $\rho^{(i)}$} In order to choose $\rho^{(i)}$ so that (\ref{eq:increasing}) holds, let us investigate the relation between $p_{\check{\mathbf{c}}}^{(i+1)}$ and $p_{\check{\mathbf{c}}}^{(i)}$, and between $\Xi^{(i+1)}$ and $\Xi^{(i)}$ in (\ref{eqn:step2}). First, we have \[ p_{\check{\mathbf{c}}}^{(i+1)}=\gamma^{(\rho\cdot\mathbf{w}_{1}^{(i)}+\rho\cdot\mathbf{w}_{2}^{(i)})\check{\mathbf{c}}^{T}}=(p_{\check{\mathbf{c}}}^{(i)})^{\rho}. \] The same holds for all words in $\mathbb{C}_{\cap}$ and, as exponentiation is monotonic, the most likely codeword for $\mathbf{w}_{1},\mathbf{w}_{2}$ also remains the most likely codeword for $\rho\cdot\mathbf{w}_{1},\rho\cdot\mathbf{w}_{2}$. Concerning the relation between $\Xi^{(i+1)}$ and $\Xi^{(i)}$, it is obvious that for $\rho=1$ we have $\Xi^{(i+1)}=\Xi^{(i)}$, but there is no simple expression for $\rho\ne1$. However, as one can always compute $\Xi^{(i+1)}$ for a given $\rho$, one can try to optimize $\rho$ using e.g. a gradient technique. Most importantly, one can always ensure that (\ref{eq:increasing}) holds by computing $p_{\check{\mathbf{c}}}^{(i+1)}$ and $\Xi^{(i+1)}$ for a given $\rho$. We propose and investigate two simple approaches in the context of our experiments in Section~\ref{sec:Experiments}. \subsection{Stopping Criteria} Decoding is successful when $\hat{\mathbf{c}}^{(i)}=(\hat{c}_{1}^{(i)},...,\hat{c}_{n}^{(i)})$ with \[ \hat{c}_{j}^{(i)}:=\text{sign}\left(\ln\frac{\sum_{(\mathbf{s},\mathbf{s'})\in\mathbb{C}_{1}\times\mathbb{C}_{2}}\gamma^{\mathbf{w}_{1}^{(i)}\mathbf{s}^{T}+\mathbf{w}_{2}^{(i)}\mathbf{s}'^{T}}\langle s_{j}=s'_{j}=+1\rangle}{\sum_{(\mathbf{s},\mathbf{s'})\in\mathbb{C}_{1}\times\mathbb{C}_{2}}\gamma^{\mathbf{w}_{1}^{(i)}\mathbf{s}^{T}+\mathbf{w}_{2}^{(i)}\mathbf{s}'^{T}}\langle s_{j}=s'_{j}=-1\rangle}\right) \] is contained in $\mathbb{C}_{1}$ and $\mathbb{C}_{2}$. \section{Experiments\label{sec:Experiments}} TBD \section{Conclusions} TBD \bibliographystyle{plain}
{ "timestamp": "2019-10-29T01:32:17", "yymm": "1904", "arxiv_id": "1904.06473", "language": "en", "url": "https://arxiv.org/abs/1904.06473" }
\section{Introduction} Since its inception in the 1990s, ghost imaging (GI) has intrigued researchers due to its novel physical peculiarities and its potential applications. A typical ghost imaging setup consists of two correlated optical beams propagating along distinct paths and impinging on two spatially-separated photodetectors: the signal beam interacts with the object and is then received by a single-pixel (bucket) detector without spatial resolution, whereas the reference beam goes through an independent path and impinges on a spatially resolving detector, such as a charge-coupled device (CCD), without interacting with the object. Even though information from either one of the detectors alone does not yield an image, an image can be obtained by cross-correlating the signals from the bucket detector and the CCD. The first GI, utilizing two-photon quantum entanglement, was reported by Pittman \textit{et\ al}\cite{RN9}. Later, it was demonstrated that GI could be implemented with pseudo-thermal sources\cite{RN5,RN6,RN3,RN2} and thermal light\cite{RN7}. In addition, computational GI (CGI) with an improved setup was proposed by Shapiro\cite{RN8,RN10}, where the reference beam is replaced by a computed field pattern. With the development of GI, this concept has been extended to domains beyond the usual optical domain mentioned above and beyond the capture of spatial properties of light. Recently, it has been demonstrated with X-rays\cite{RN36,RN43,RN41,RN34}, atoms\cite{RN45}, and even electrons\cite{RN40}, as well as temporal ghost imaging\cite{RN80,RN79,RN82,RN81,RN84,RN83,RN78}. However, up to now, regardless of the type of ghost imaging method, the output image is usually reconstructed from the acquired data by a computer algorithm. Here, we propose an alternative, novel naked-eye ghost imaging scheme that avoids computer processing, which makes GI more convenient. In detail, a photoelectric feedback loop is used to link the bucket detector and the light source, where the intensity of the light source is modulated by each output current value of the bucket detector. That is to say, the multiplication in traditional GI between the output current value of the bucket detector and the corresponding value of the intensity distribution of the reference beam is realized by this photoelectric feedback loop. It is important to recognize that there is an inverse correlation in our work. Meanwhile, the vision persistence effect is used to implement the integration process and to generate negative images observed by naked eyes. In principle, any photosensitive material exhibiting the vision persistence effect can perform this integrating process and show the imaging result. To realize high-contrast naked-eye ghost imaging, one of the challenges is overcoming the background introduced by the reference beam, since the image is immersed in the reference light beam. Toward this end, a special pattern-scanning architecture on a low-speed light-modulation mask is designed, which enables high-resolution imaging with lower-order Hadamard vectors and boosts the imaging speed as well. Moreover, two kinds of feedback circuits, a digital circuit and an analog circuit, are presented, which can achieve high-speed feedback operation on the light intensity. With this approach, we demonstrate high-contrast real-time imaging for moving objects.
Our work opens a new way to utilize GI and can be applied to the recently developed GI methods in the usual optical domain, with X-rays, atoms and electrons, or in the field of LIDAR. \section{Experimental Section} \subsection{Experiment principle} Figure \ref{fig:scheme} shows the schematic diagram of the naked-eye GI imaging system. A red laser beam is modulated by a rotating light-modulation mask. The modulated light is then divided into two beams. The reflected beam is used for naked-eye imaging. The other illuminates and interacts with the objects, the letters ``X'', ``J'', ``T'' and ``U'', each with $ 35 \times 35 $ pixels. The transmitted light after the objects is collected by a bucket detector comprising a collecting lens and a single-pixel photodetector. The output bucket signal is processed by the circuit and then becomes a feedback signal injected into the laser driver, which modulates the laser intensity. At this point, one loop for one pattern projection is completed. Once all loops for a group of patterns are completed within the time scale of vision persistence, a negative image will be observed by the eyes at the reflection arm mentioned above. In this work, we use a CCD camera to mimic the vision persistence effect of human eyes. Since the temporary retention time of human eyes is about 0.02 second in daytime vision, 0.1 second in intermediary vision and 0.2 second in night vision, we choose 0.2 second as the exposure time of the CCD. A high-contrast real-time image will then be observed by such a photosensitive component. The differences from the typical GI setup are that a photoelectric feedback loop is used to link the bucket detector and the light source, and that the negative image can be observed directly by the naked eye at the position where the spatial distribution detector of typical GI is placed. To understand this naked-eye GI process, the imaging mechanism is described in the following. \begin{figure}[htb] \centering \includegraphics[width=0.95\linewidth]{"fig01"} \caption{Schematic diagram of the naked-eye ghost imaging system, including a laser device, black box, bucket detector, feedback loop and naked-eye imaging. The black box with transmissivity $T_i$ comprises the mask, lens, BS and object.} \label{fig:scheme} \end{figure} Initially, the laser beam (with intensity ${I_i}$) goes through a black box (with transmissivity $T_i$) comprising a rotating mask and an object, and is then collected by a bucket detector as shown in Figure \ref{fig:scheme}. Thus, the output value ${B_i}$ of the bucket detector is given by \begin{equation}\label{B} {B_i} = {I_i} \times {T_i}. \end{equation} Then, with this result, the algebraic loop can be built as follows: \begin{equation}\label{I-t} {I_{i + 1}} = f({B_i}) = f({I_i} \times {T_i}). \end{equation} From a statistical, or steady-state, point of view, Equation \ref{I-t} can be rewritten as \begin{equation}\label{I-I-K} I = f(B) = f(I \times T). \end{equation} By introducing the negative feedback circuit, the modulated laser intensity $ I $ becomes a monotonically decreasing function of the $ B $ value, \begin{equation}\label{dfdB} {\frac{df(B)}{dB}} < 0. \end{equation} Therefore, the laser intensity $ I $ has a significant inverse relationship with $ T $, \begin{equation}\label{dIdK} {{dI} \over {dT}} = {{df(B)} \over {dB}}{{dB} \over {dT}} = {{df(B)} \over {dB}}I < 0. \end{equation} Without the feedback loop, i.e., with the intensity of the light source held constant, this system degenerates into the traditional GI system.
Thus, the output value ${I_{2}(t_i)}$ of the bucket detector depends only on the transmissivity $T_i$ of the black box. Meanwhile, the intensity distribution ${I_{1}{(x,y,t_i)}}$ of the patterned light beam can be understood as the intensity of the light source multiplied by a mask modulation function ${A_i}$. Therefore, images can be obtained via the correlation process\cite{RN96}, that is, \begin{equation}\label{G-traditional} G_{traditional}^{(2)} = < {{I_{1}(x,y,t_i)}{I_{2}}(t_i)} >=< {A_i} {T_i} >. \end{equation} However, in our case, the intensity of the light source is not constant; it is modulated by the feedback signal as shown in Equation \ref{I-I-K}. By substituting Equation \ref{I-I-K} into Equation \ref{G-traditional}, one can get the naked-eye ghost imaging result via Equation \ref{G-2}, \begin{equation}\label{G-2} {\hat G^{(2)}} = < {A_i}{I_i} > = < {P_i} > = \int_t^{t + \tau} {{P_i}} dt, \end{equation} where the patterned light $ P_i $ results from the feedback-modulated laser beam interacting with the mask and passing through it, which realizes the multiplication between $ A_i $ and $ I_i $. This patterned light $ P_i $ is then split by a beam splitter: the transmitted part illuminates the object, serving as the feedback regulation, and the reflected pattern is diffused on the screen. The output light from the screen is observed by a photosensitive component such as the human eye, which performs the integration process shown in Equation \ref{G-2}, where $ \tau $ stands for the vision persistence time and ${\hat G^{(2)}}$ stands for the naked-eye ghost imaging result. Due to the inverse relationship, a negative image is obtained. \subsubsection{Light modulation mask} One of the challenges with naked-eye GI is overcoming the background introduced by the reference beam, since the image is immersed in the reference light beam. To realize high-contrast naked-eye GI, a special pattern-scanning architecture is designed on a low-speed light-modulation mask. Firstly, the object is divided into several blocks, so that the dimensionality of the image can be reduced. For instance, one can divide the object ($ n\times n $) into $k$ column blocks. Every block is $ n\times (n/k) $ pixels. Next, we use a complete set of low-order Hadamard scanning patterns to scan each block row by row as shown in Figure \ref{fig:disk}. \begin{figure}[H] \centering \includegraphics[width=0.9\linewidth]{Disk} \caption{Structure of light modulation mask based on the Hadamard vector.} \label{fig:disk} \end{figure} In this way, one can get the visibility of each row in a block via this imaging method, \begin{equation}\label{contrast} \begin{split} Contras{t_{\hat H\_Block}} &= {{1 + {N_{Block}}} \over {1 + {N_{Block}}\left( {2{N_{Block}} - 5} \right)}},\\ {N_{Block}} &= n/k. \end{split} \end{equation} To obtain high contrast with Hadamard patterns, apart from the sample scanning, a suitable choice is ${N_{Block}} = 7$. In addition to the high-contrast property, this method enables high-resolution imaging with lower-order Hadamard vectors and boosts the imaging speed as well. \subsection{Digital modulation method} Figure \ref{fig:DM} shows the work flowchart of the negative-feedback digital modulation system. The output signal $B$ of the bucket detector is fed into the comparator to generate a TTL signal, which is fed into the laser driver to modulate the laser intensity. In detail, when $I> {b/T}$, $ I $ will decrease; when $I < {b/T}$, $ I $ will increase, where $b$ is the reference voltage.
Taking the laser relaxation time into account, the laser intensity will approach \begin{equation}\label{I-b-K} I \approx {b \over T}. \end{equation} In addition, this negative-feedback imaging system has high control precision and stable operation, which suppresses noise effectively. \begin{figure}[H] \centering \includegraphics[width=0.6\linewidth]{DM} \caption{Work flowchart of the negative-feedback digital modulation system.} \label{fig:DM} \end{figure} \subsection{Analog modulation method} The experimental apparatus shown in Figure \ref{fig:scheme} can be re-described as a negative feedback loop when an analog modulation scheme is adopted, as shown in Figure \ref{fig:AM}. There are two input signals for the analog modulator: one is a constant voltage ($ U $) denoting the measured laser intensity without any loss; the other is the output value $ B $ of the bucket detector. The output signal ($ S $) from the analog modulator is the difference between these two input signals, \begin{equation}\label{S} S = I = U - I \times T. \end{equation} This signal is fed into the laser driver to modulate the laser intensity. Considering the relaxation time of the imaging system, the laser intensity will be modulated by $\hat T$ as shown in Equation \ref{I-U-K}: \begin{equation}\label{I-U-K} \begin{split} I &= {U \over {\hat T}},\\ {\hat T} &= 1 + T. \end{split} \end{equation} The same noise-suppression effect can be achieved by this system. \begin{figure}[H] \centering \includegraphics[width=1\linewidth]{AM} \caption{Negative feedback loop with an analog modulation.} \label{fig:AM} \end{figure} \subsection{Anti-noise capacity} In this ghost imaging system, the main noise comes from the bucket detector, which is exposed to the external environment. \subsubsection{Anti-noise on circuits} Firstly, in the digital feedback loop, the noise signal from ambient light or other sources will be limited by the comparator of the digital modulator. Moreover, if the noise passes through the comparator, the noise signal will be automatically suppressed by the negative feedback system. Secondly, as in the digital feedback loop, the analog feedback loop makes the output counteract the noise at the input, reducing the error between the system output and the system target. Ultimately, it makes the system tend toward stability. \subsubsection{Anti-noise on imaging algorithm} If noise has been introduced into the system, the imaging expression becomes \begin{equation}\label{G--1} \begin{split} G_I^{(2)} &= A_{N \times M}^T{({b \over {T - \Delta {T_{noise}}}})_{M \times 1}} \\ &= A_{N \times M}^T{({b \over T}{1 \over {1 - {{\Delta {T_{noise}}} \over T}}})_{M \times 1}}. \end{split} \end{equation} Then one can take the Taylor expansion of Equation \ref{G--1}: \begin{equation}\label{G--2} \begin{split} G_I^{(2)} = A_{N \times M}^T({b \over T}(1 &+ ({{\Delta {T_{noise}}} \over T}) + {({{\Delta {T_{noise}}} \over T})^2}\\ + {({{\Delta {T_{noise}}} \over T})^3} &+ O{\left( {{{\Delta {T_{noise}}} \over T}} \right)^4}))_{M \times 1}. \end{split} \end{equation} Because $\Delta {T_{noise}}$ is much smaller than $ T $, it is easy to see that \begin{equation}\label{G--3} {({{\Delta {T_{noise}}} \over T})^n} \ll {{\Delta {T_{noise}}} \over T} \ll 1,n \in {{\bf{N}}^*}. \end{equation} From Equation \ref{G--2} and Equation \ref{G--3}, one can see that the effect of noise is significantly weakened.
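A minimal Python sketch of the digital feedback loop (the step size, noise level, and transmissivity below are illustrative assumptions) shows the intensity settling near $b/T$, realizing the inverse relation of Equation \ref{I-b-K} even in the presence of detector noise. \begin{verbatim}
import numpy as np

# Sketch of the digital negative-feedback loop: the laser intensity is
# stepped up or down depending on whether the bucket value B = I*T is
# below or above the reference voltage b, so that I settles near b/T.
rng = np.random.default_rng(0)
b, T, step = 1.0, 0.25, 0.02
I = 0.5                                  # initial laser intensity (assumed)

for _ in range(2000):
    noise = 0.01*rng.standard_normal()   # detector noise on the bucket signal
    B = I*T + noise
    I += step if B < b else -step        # comparator: raise I if B < b
print(f"I = {I:.3f}, b/T = {b/T:.3f}")   # I fluctuates around b/T = 4.0
\end{verbatim}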
\section{Results and discussion} Figure \ref{fig:xjtu1} and Figure \ref{fig:xjtu2} show the high-contrast imaging results (letters ``X'', ``J'', ``T'' and ``U'') obtained with the digital and analog negative feedback loops, respectively. Based on our method, the key problem of the image being immersed in the probe light beam is solved. It is worth noting that these are negative images and videos. \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{XJTU-1} \caption{Imaging result under digital negative feedback.} \label{fig:xjtu1} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{XJTU-2} \caption{Imaging result under analog negative feedback.} \label{fig:xjtu2} \end{figure} In the imaging system, the modulation can be expressed as $ A \times X $, \begin{equation}\label{K-matrix} {T_{M \times 1}} = A_{N \times M}^T{X_{N \times 1}}, \end{equation} where $ A $ is the mask modulation function and $ X $ denotes the object. Equation \ref{B} can be rewritten via Equation \ref{K-matrix} as \begin{equation}\label{B-matrix} {B_{M \times 1}} = diag({I_{M \times 1}})A_{N \times M}^T{X_{N \times 1}}. \end{equation} On the other hand, through Equation \ref{B}, $ T $ is equal to the bucket signal value when the laser intensity is constant. So, traditional correlated imaging can be obtained via $T $ and $ A $: \begin{equation}\label{G-2-matrix} {G^{(2)}} = A_{N \times M}^T{T_{M \times 1}}. \end{equation} However, the value of $ T $ is unknown, and only the laser intensity changes with $ T $. In addition, from Equation \ref{I-b-K} and Equation \ref{I-U-K}, $ I $ and $ T $ are inversely related. As shown in Figure \ref{fig:scheme}, when one watches the screen, the human eye will automatically integrate the intensity-modulated patterns. So, the imaging process can be expressed as \begin{equation}\label{key} \begin{split} G_{{I_1}}^{(2)} &= A_{N \times M}^T{I_{M \times 1}} = A_{N \times M}^T{({b \over T})_{M \times 1}},\\ G_{{I_2}}^{(2)} &= A_{N \times M}^T{I_{M \times 1}} = A_{N \times M}^T{({U \over {\hat T}})_{M \times 1}}. \end{split} \end{equation} One can see that $ I $ and $ T $ are inversely related, resulting in negative images. From Equation \ref{I-b-K} and Equation \ref{I-U-K}, one also obtains adaptive processing. If the current speckle pattern is very different from the object, the value of $T$ is very small. Thus the returned signal is very weak and the bucket value $B$ is very small too. The system will then automatically increase the intensity of the current speckle via the simple feedback system we proposed. Conversely, it will automatically decrease the intensity of the current speckle. As a result, the principal component of the negative image is strengthened. \section{Conclusion} In summary, naked-eye ghost imaging via photoelectric feedback is realized. The obstacle to realizing high-contrast real-time imaging of moving objects is removed by a special pattern-scanning architecture and a feedback system. Meanwhile, high resolution and boosted imaging speed can be obtained with low-pixel illumination from a low-speed rotating light-modulation mask. Two types of feedback circuits, digital and analog, are used to modulate the laser intensity, which brings the advantage of noise immunity. This work opens a new way to utilize GI, with potential applications to 3D GI visualization, GI virtual reality and so on. \begin{acknowledgments} National Basic Research Program of China (973 Program) (Grant No.
2015CB654602); Key Scientific and Technological Innovation Team of Shaanxi Province (Grant No. 2018TD-024); 111 Project of China (Grant No. B14040).\\ \end{acknowledgments}
{ "timestamp": "2019-04-16T02:09:45", "yymm": "1904", "arxiv_id": "1904.06529", "language": "en", "url": "https://arxiv.org/abs/1904.06529" }
\section{INTRODUCTION} \label{S:intro} The braking indices of pulsars are indicative of the spin-down mechanisms of neutron stars (NSs), which can be related to various aspects of NS physics. Traditional scenarios of a rotating magnetic dipole in vacuo show that pulsars should have braking indices $n=3$ (e.g., \cite{Ostriker:1969}). However, this simple model is inconsistent with the observations of braking indices for all nine young pulsars, of which eight have $n<3$ (see \cite{Lyne:2015} and references therein) and only one has $n>3$ \cite{Archibald:2016}. To explain the $n<3$ braking indices, several models have been invoked, including accretion from the fallback disc around a NS \cite{Menou:2001}, braking torques due to relativistic particle winds and magnetic dipole radiation (MDR) \cite{Xu:2001}, spin-down caused by quantum vacuum friction and MDR \cite{Coelho:2016}, a decrease in the effective moment of inertia of a NS as its interior normal matter becomes superfluid \cite{Ho:2012a}, and an increase in the surface dipole magnetic field due to either reemergence of the magnetic field buried after birth \cite{Muslimov:1996} or evolution of the crustal magnetic field \cite{Gourgouliatos:2015}. PSR J1640-4631, the only young pulsar with $n>3$ \cite{Archibald:2016} observed hitherto,\footnote{Recently, it has been claimed that another young X-ray pulsar, PSR J0537-6910, may have $n=7$ as inferred from its complete timing data \cite{Andersson:2017}. However, the result is inconclusive because of frequent glitches of this pulsar.} has attracted great attention, and various models have been proposed to elucidate its large braking index, for instance, magnetic dipole spin-down of a pulsar with a plasma-filled magnetosphere \cite{Eksi:2016}, a combination of dipole and wind braking \cite{Tong:2017}, spin-down of a conventional NS (or even an exotic low-mass NS \cite{Chen:2016}) due to MDR and gravitational wave emission (GWE) \cite{de Araujo:2016a,de Araujo:2016c}, and classical MDR braking but with dipole field decay involved \cite{Gao:2017}. Theoretically, both GWE and dipole field decay may be inevitable for a NS with a strong magnetic field and a finitely conductive crust. The strong magnetic fields of NSs could deform them into quadrupolar ellipsoids (see \cite{Glampedakis:2017} for a recent review), making them promising sources for continuous GW searches using ground-based GW detectors, such as advanced LIGO \cite{Abbott:2009}, Virgo \cite{Acernese:2008}, and the planned Einstein Telescope \cite{Punturo:2010}. Although no GW signals from known pulsars have been detected during the first observing run of advanced LIGO \cite{Abbott:2017}, the magnetically induced GWE could indeed affect the spin evolution of NSs and leave some imprints in their timing data. Moreover, for a deformed NS that is not in the minimum spin-energy state, free-body precession of the star's magnetic axis around the spin axis will unavoidably occur in order to minimize its spin energy, which could lead to a change of the tilt angle between the two axes. Generally, the tilt angle evolution of a NS with a plasma-filled magnetosphere \cite{Goldreich:1969} is determined by the MDR \cite{Philippov:2014}, the GWE reaction \cite{Cutler:2000}, and damping of the free-body precession due to internal dissipation \cite{Alpar:1988,Cutler:2002}. Among them, the angle evolution resulting from damping of the free-body precession can be related to a critical parameter called the number of precession cycles, $\xi$ \cite{Jone:1976,Alpar:1988}.
Since the damping mechanisms are not clearly understood, only rather rough estimates of $\xi$ have been proposed hitherto. For instance, as a possible damping mechanism, Alpar \& Sauls \cite{Alpar:1988} studied the core-crust coupling due to scattering of electrons off the neutron vortices and obtained $\xi\approx 10^{2}$--$10^{4}$. On the other hand, damping of the stellar free-body precession caused by elastic dissipation in the crust gives a relatively large $\xi\lesssim 10^5$ \cite{Cutler:2002}. This parameter is extremely important when discussing the GWE of a NS (e.g., \cite{Stella:2005,Gualtieri:2011}), because $\xi$ could significantly affect the time scale over which the optimal (unfavorable) configuration for GWE can be achieved, provided that the star has a prolate (oblate) shape.

It has long been suggested that the dipole field, possibly associated with the crustal field of a NS, could decay due to Hall drift and Ohmic dissipation (e.g., \cite{Jones:1988,Goldreich:1992}). The specific time scale for the field decay is still uncertain, though typical time scales of $\sim10^4$ yr (depending on the dipole field strength and the density at the base of the crust) \cite{Cumming:2004,Dall'Osso:2012} and $\sim10^6$ yr (depending on the electrical conductivity of the crust) \cite{Goldreich:1992,JonesPB:2001,Ho:2011} have been proposed for Hall drift and Ohmic dissipation, respectively. Furthermore, population synthesis studies of isolated radio pulsars suggested an extremely long decay time scale of $\gtrsim 10^8$ yr if field decay indeed occurs \cite{Mukherjee:1997}.

In this paper, we explain the braking index of PSR J1640-4631 based on a model involving both GWE and dipole field decay, which are natural consequences of the strong magnetic field of a NS. We propose a new approach to estimating $\xi$ that uses the timing data of PSR J1640-4631 and the magnetic field decay theory. We suggest that once the tilt angle of this pulsar is measured, we could not only put constraints on the highly uncertain parameter $\xi$ but also gain insight into its internal magnetic field configuration. Interestingly, the value of $\xi$ would be larger than previous results unless a tiny tilt angle ($\lesssim 5^\circ$) is observed.

The paper is organized as follows. The evolutionary model for PSR J1640-4631 is presented in Sec. \ref{Sec II}. We introduce the theory of magnetic field decay in Sec. \ref{Sec III}. Our results are given in Sec. \ref{Sec IV}. Finally, a conclusion and some brief discussions of possible physical explanations of a large $\xi$ and its influence on the GWE from newborn magnetars are provided in Sec. \ref{Sec V}. \section{EVOLUTION OF PSR J1640-4631}\label{Sec II} Using the \textit{NuSTAR} X-ray observatory, Gotthelf \textit{et al}. \cite{Gotthelf:2014} discovered the pulsar PSR J1640-4631, whose period and first period derivative are $P=206$ ms and $\dot{P}=9.758\times10^{-13}$ s/s, respectively. Recently, by performing a phase-coherent timing analysis of the X-ray timing data of PSR J1640-4631 observed with \textit{NuSTAR}, Archibald \textit{et al}. \cite{Archibald:2016} obtained its second period derivative and braking index $n=3.15(3)$.
For a pulsar with a corotating plasma magnetosphere \cite{Goldreich:1969} that spins down mainly due to MDR and magnetic deformation-induced GWE, its angular frequency evolution has the following form \cite{Cutler:2000,Spitkovsky:2006}: \begin{eqnarray} \dot{\omega}=-\frac{2G\epsilon_{\rm B}^2I\omega^5}{5c^5} \sin^2\chi(1+15\sin^2\chi)-\frac{kB_{\rm d}^2R^6\omega^3}{Ic^3}(1+\sin^2\chi) \label{dwdt}, \end{eqnarray} where $\epsilon_{\rm B}$ is the ellipticity of magnetic deformation, $I$ the moment of inertia, $\chi$ the tilt angle, $k$ the coefficient related to MDR, $B_{\rm d}$ the surface dipole magnetic field at the magnetic pole, and $R$ the stellar radius. Hereinafter, we adopt $k=1/6$ and take canonical values for the parameters of the presumed $1.4M_\odot$ NS, $I=10^{45}$ g ${\rm cm}^2$ and $R=10$ km.\footnote{We note that the value of $k$ is still under debate (see Refs. \cite{Spitkovsky:2006,Philippov:2014,Contopoulos:2014,Philippov:2015}). However, adopting different values, e.g., $k=1/4$ and $R=12$ km, could affect the value of $\xi$ by at most a factor of two.} We define the ratio $\eta=\dot{\omega}_{\rm MDR}/\dot{\omega}_{\rm GWE}=5kc^2B_{\rm d}^2R^6(1+\sin^2\chi)/[2G\epsilon_{\rm B}^2I^2\omega^2(1+15\sin^2\chi)\sin^2\chi]$, where $\dot{\omega}_{\rm MDR}$ and $\dot{\omega}_{\rm GWE}$ are the MDR-induced and GWE-induced spin-down rates, respectively. Though the GWE braking becomes maximal when $\chi=\pi/2$ is taken, one still has $\eta\gg1$ for $\left|\epsilon_{\rm B}\right|\ll 8.69\times10^{-3}(B_{\rm d}/10^{13}~{\rm G})$, as $\omega$ is known for PSR J1640-4631. We will show that no matter whether the internal fields of this NS are poloidal-dominated (PD) or toroidal-dominated (TD), the theoretically estimated $\epsilon_{\rm B}$ is far below this limit.

Previous studies have shown that the NS equation of state, the magnetic energy, the internal magnetic configuration, and the presence of proton superconductivity in the core (which may change the interior magnetic field distribution) can all affect the magnetic deformation of a NS (e.g., Refs. \cite{Haskell:2008,Dall'Osso:2009}). Many theoretical calculations of the ellipticity have been performed (see, e.g., Refs. \cite{Bonazzola:1996,Cutler:2002,Haskell:2008,Dall'Osso:2009,Ciolfi:2009,Ciolfi:2010,Gualtieri:2011,Mastrano:2011,Mastrano:2012,Lander:2013,Lasky:2013,Mastrano:2015,Dall'Osso:2015}). For a young NS like PSR J1640-4631, the interior temperature is probably lower than the critical temperature for proton superconductivity \cite{Page:2014}, even if only modified Urca cooling occurs \cite{Page:2006}. Hence, to estimate $\epsilon_{\rm B}$ of PSR J1640-4631, the effect of proton superconductivity should be included, as was done in Ref. \cite{Lander:2013}. After considering type-II proton superconductivity in the interior of a NS, Lander \cite{Lander:2013} self-consistently obtained an equilibrium configuration that consists of a mixed poloidal-toroidal field and derived the corresponding magnetic ellipticity \begin{eqnarray} \epsilon_{\rm B}=3.4\times10^{-7}\left({B_{\rm d}\over10^{13}~{\rm G}}\right)\left({H_{\rm c1}(0)\over10^{16}~{\rm G}}\right) \label{epsil1}, \end{eqnarray} where the central critical field strength is taken to be $H_{\rm c1}(0)=10^{16}$ G \cite{Lander:2013}. In this field configuration, since the dominant part is the poloidal component, the NS has an oblate shape ($\epsilon_{\rm B}>0$).
This configuration is partially akin to the twisted-torus configuration found in numerical simulations \cite{Braithwaite:2006}. The main difference is that in the latter configuration the toroidal field may be dominant \cite{Braithwaite:2009}, so that the NS possibly has a prolate shape ($\epsilon_{\rm B}<0$). With type-II proton superconductivity involved, and based on the twisted-torus configuration, a calculation of $\epsilon_{\rm B}$ is presented in Ref. \cite{Mastrano:2012}. However, the results are rather rough and only upper limits are given for $\epsilon_{\rm B}$, because the superconducting stellar interior is assumed to have a homogeneous magnetic permeability, which is in fact physically implausible. Since there are currently no self-consistent calculations of the ellipticity of a superconducting NS with a TD twisted-torus field configuration inside, we simply adopt the $\epsilon_{\rm B}$ derived for a purely toroidal configuration as a substitute, which takes the form \cite{Akgun:2008} \begin{eqnarray} \epsilon_{\rm B}\approx-10^{-8}\left({H\over10^{15}~{\rm G}}\right)\left({\bar{B}_{\rm in}\over10^{13}~{\rm G}}\right) \label{epsil2}, \end{eqnarray} where $H\approx10^{15}$ G is the critical field strength and $\bar{B}_{\rm in}$ the volume-averaged strength of the internal toroidal field.

It is generally hard to determine $\bar{B}_{\rm in}$ of a NS. Fortunately, the observed positive correlation between the surface temperatures and dipole magnetic fields of isolated NSs (with $B_{\rm d}\gtrsim 10^{13}$ G) indicates that strong toroidal fields with volume-averaged strengths of $\sim10B_{\rm d}$ possibly exist in NS crusts \cite{Pons:2007}. We thus assume that the strengths of the crustal toroidal fields are representative of $\bar{B}_{\rm in}$ of the whole star, that is, $\bar{B}_{\rm in}\simeq10B_{\rm d}$. Internal fields one order of magnitude (or more) higher than the dipole fields may indeed be present in young pulsars (see Ref. \cite{Glampedakis:2012}). It should be noted that the internal fields which determine the ellipticity may also decrease as the star evolves. Here we assume that the relation between the internal fields and $B_{\rm d}$ remains unchanged and that the expression for $\epsilon_{\rm B}$ given by Eq. (\ref{epsil1}) or (\ref{epsil2}) still holds as $B_{\rm d}$ decays, though a global long-term numerical simulation is needed to reveal how the internal fields and $\epsilon_{\rm B}$ vary with time. Interestingly, a time-dependent $\epsilon_{\rm B}$, as also considered in Ref. \cite{de Araujo:2016c}, can hardly change our results in comparison with the case of a time-independent $\epsilon_{\rm B}$. The reason is that adopting a time-dependent $\epsilon_{\rm B}$ results in a factor $(1+1/\eta)\simeq 1$ in front of the term $\dot{B}_{\rm d}/B_{\rm d}$ in Eq. (\ref{bi2}), which is exactly 1 for a time-independent $\epsilon_{\rm B}$. From Eqs. (\ref{epsil1}) and (\ref{epsil2}), we can see that these estimated $\epsilon_{\rm B}$ are consistent with the requirement of $\eta\gg1$. The GWE braking can therefore be neglected owing to its negligible effect on the spin-down of PSR J1640-4631.
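As a quick numerical check of these statements (the sketch below is ours, not part of the original analysis, and simply assumes the canonical parameter values quoted above), one can infer $B_{\rm d}$ from the timing data via the MDR term of Eq. (\ref{dwdt}), evaluate Eqs. (\ref{epsil1}) and (\ref{epsil2}), and verify that $\eta\gg1$ holds in both the PD and TD cases:

\begin{verbatim}
# Back-of-the-envelope check (ours): infer B_d from P and Pdot using the
# MDR term of Eq. (1), evaluate the ellipticity estimates of Eqs. (2) and
# (3), and confirm eta >> 1.  All values are the canonical ones in the text.
import math

G, c = 6.674e-8, 2.998e10        # cgs units
I, R, k = 1e45, 1e6, 1.0 / 6.0   # moment of inertia, radius, MDR coefficient
P, Pdot = 0.206, 9.758e-13       # timing data of PSR J1640-4631

omega = 2.0 * math.pi / P
omega_dot = -2.0 * math.pi * Pdot / P**2

def B_dipole(chi):
    """Surface dipole field from MDR-only spin-down (GWE term neglected)."""
    s2 = math.sin(chi) ** 2
    return math.sqrt(-omega_dot * I * c**3 / (k * R**6 * omega**3 * (1 + s2)))

Bd = B_dipole(math.pi / 2)                  # ~2e13 G, as quoted in Sec. III
eps_PD = 3.4e-7 * (Bd / 1e13)               # Eq. (2), with H_c1(0) = 1e16 G
eps_TD = -1e-8 * (10.0 * Bd / 1e13)         # Eq. (3), B_in ~ 10 B_d, H = 1e15 G

def eta(eps, chi):
    """Ratio of the MDR to the GWE spin-down torque, as defined in the text."""
    s2 = math.sin(chi) ** 2
    return (5 * k * c**2 * Bd**2 * R**6 * (1 + s2)
            / (2 * G * eps**2 * I**2 * omega**2 * (1 + 15 * s2) * s2))

print(f"B_d ~ {Bd:.2e} G, eps_PD = {eps_PD:.1e}, eps_TD = {eps_TD:.1e}")
print(f"eta(PD) ~ {eta(eps_PD, math.pi/2):.1e}, eta(TD) ~ {eta(eps_TD, math.pi/2):.1e}")
\end{verbatim}

Running this yields $B_{\rm d}\approx2\times10^{13}$ G and $\eta\gtrsim10^{8}$ for both field configurations, consistent with neglecting the GWE braking term.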
However, the GWE could still affect the pulsar's tilt angle evolution. The tilt angle evolution of a magnetically deformed NS with a plasma magnetosphere is given by \cite{Cutler:2000,Jones:2001,Dall'Osso:2009,Philippov:2014}: \begin{eqnarray} \dot{\chi}=\left\{ \begin{aligned} -\frac{2G}{5c^5}I\epsilon_{\rm B}^2\omega^4\sin\chi&\cos\chi(15\sin^2\chi+1)-{\epsilon_{\rm B}\over\xi P}\tan\chi\\&-\frac{kB_{\rm d}^2R^6\omega^2}{Ic^3}\sin\chi\cos\chi,~{\rm for}~\epsilon_{\rm B}>0 \\ -\frac{2G}{5c^5}I\epsilon_{\rm B}^2\omega^4\sin\chi&\cos\chi(15\sin^2\chi+1)-{\epsilon_{\rm B}\over\xi P}\cot\chi\\&-\frac{kB_{\rm d}^2R^6\omega^2}{Ic^3}\sin\chi\cos\chi,~{\rm for}~\epsilon_{\rm B}<0. \end{aligned} \right. \label{dchi} \end{eqnarray} The first and third terms of the above formula represent the alignment effects caused by the GWE and MDR, respectively. The second term represents the angular evolution due to damping of the stellar free-body precession by internal dissipation. Depending on the shape of the NS (i.e., the sign of $\epsilon_{\rm B}$), this effect can either decrease or increase $\chi$. In fact, Eq. (\ref{dchi}) constitutes the main difference from previous models \cite{Chen:2016,de Araujo:2016a,de Araujo:2016c}, in which these mechanisms for tilt angle evolution were not considered. By taking both the field decay and the tilt angle evolution into account, the braking index reads \begin{eqnarray} n = &3-{2P\over\dot{P}}\left\{{\dot{B}_{\rm d}\over B_{\rm d}}+\dot{\chi}\sin\chi\cos\chi\left[\frac{1}{1+\sin^2\chi} +\right.\right.\nonumber\\ &\phantom{=\;\;}\left.\left.\frac{1+30\sin^2\chi}{\eta\sin^2\chi\left(1+15\sin^2\chi\right)} \right]\right\} \label{bi2}, \end{eqnarray} where $\dot{B}_{\rm d}$ is the decay rate of $B_{\rm d}$. We will see below that Eq. (\ref{bi2}) is the critical link relating $\xi$ in Eq. (\ref{dchi}) to the timing data of PSR J1640-4631 and the field decay time scale $\tau_{\rm D}=-B_{\rm d}/\dot{B}_{\rm d}$ determined by the field decay theory. \section{THE THEORY OF MAGNETIC FIELD DECAY}\label{Sec III} The decay rate of $B_{\rm d}$ is determined by the specific field decay mechanisms, which are generally considered to be Hall drift and Ohmic dissipation if the dipole field has a crustal origin. However, the mathematical form of the field decay is still not clearly known. For simplicity, we consider two typical decay forms that introduce the fewest parameters. The first one is the exponential form \cite{Pons:2007,Dall'Osso:2012} \begin{eqnarray} {dB_{\rm d}\over dt}=-{B_{\rm d}\over \tau_{\rm D}} \label{dBdt1}, \end{eqnarray} where $\tau_{\rm D}$ is the dipole field decay time scale. The second one is the nonlinear form \cite{Dall'Osso:2012,Ho:2012b,Gao:2017} \begin{eqnarray} {dB_{\rm d}\over dt}=-{B_{\rm d}\over {\tau_{\rm D}+t}} \label{dBdt2}, \end{eqnarray} where $t$ is the actual age of the pulsar.
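As an aside (the integration here is ours and not given explicitly in the text), both decay laws admit simple closed-form solutions with $B_{\rm d}(0)=B_0$: Eq. (\ref{dBdt1}) yields $B_{\rm d}(t)=B_0\,e^{-t/\tau_{\rm D}}$, whereas Eq. (\ref{dBdt2}) yields $B_{\rm d}(t)=B_0\,\tau_{\rm D}/(\tau_{\rm D}+t)$, so the nonlinear form decays much more slowly than the exponential one once $t\gtrsim\tau_{\rm D}$.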
Generally, $\tau_{\rm D}$ may be determined by both Hall drift and Ohmic dissipation in the crust as $1/\tau_{\rm D}=1/\tau_{\rm Hall}+1/\tau_{\rm Ohmic}$ (see, e.g., \cite{Gao:2017}), where $\tau_{\rm Hall}$ and $\tau_{\rm Ohmic}$ are the Hall drift and Ohmic dissipation time scales, respectively. It should also be noted that Hall drift itself is a non-dissipative process; however, it can substantially accelerate the field decay by converting the large-scale magnetic field into small-scale components, which then decay rapidly through Ohmic dissipation \cite{Goldreich:1992,Muslimov:1994}. In this case, the field decay time scale may be set by the Hall time scale in the crust as $\tau_{\rm D}=\tau_{\rm Hall}\simeq1.2\times10^4(10^{15}~{\rm G}/B_{\rm d})~{\rm yr}$ \cite{Cumming:2004,Dall'Osso:2012}. Furthermore, if Ohmic dissipation dominates the crustal field decay process, as indicated by the positive correlation between the surface temperatures and dipole fields of isolated NSs \cite{Pons:2007}, the dipole fields, which are assumed to be proportional to the crustal fields, may decay on the same time scale as the latter, $\tau_{\rm D}=\tau_{\rm Ohmic}\simeq5\times10^5$ or $10^6$ yr \cite{Pons:2007}. Lastly, numerical modeling of the coupled magnetic field evolution in the crust and the core of a NS shows that $B_{\rm d}$ could decay over a time scale $\tau_{\rm D}\simeq 150$ Myr due to the combined effects of flux tube drift in the core and Ohmic dissipation in the crust \cite{Bransgrove:2018,Zhu:2018}. This may represent the longest field decay time scale predicted theoretically, and it is also consistent with the results of pulsar population synthesis \cite{Mukherjee:1997}.

In Fig. \ref{Fig1} we show $\tau_{\rm D}$ as a function of $\chi$; the tilt angle is related to $B_{\rm d}$ via Eq. (\ref{dwdt}) with the GWE term neglected. From the timing data of PSR J1640-4631, we obtain $B_{\rm d}\sim2\times10^{13}$ G. Thus $\tau_{\rm Hall}(\chi)$ (black solid line) is approximately equal to $\tau_{\rm Ohmic}\simeq5\times10^5$ yr (black dashed line). If $\tau_{\rm D}(\chi)$ follows the form $\tau_{\rm D}(\chi)=1/[1/\tau_{\rm Hall}(\chi)+1/\tau_{\rm Ohmic}]$, its minimum value at each $\chi$ can be obtained by taking $\tau_{\rm Ohmic}=5\times10^5$ yr, as shown by the black dash-dot-dotted line (also the lower boundary of the blank region) in Fig. \ref{Fig1}. A larger $\tau_{\rm Ohmic}$ shifts this boundary upwards, but it should not surpass $\tau_{\rm Hall}(\chi)$. The maximum value of $\tau_{\rm D}(\chi)$ at each $\chi$ is determined by $\tau_{\rm Ohmic}$, which may be $5\times10^5$, $10^6$ (black dotted line), or $1.5\times10^8$ yr (black dash-dotted line) if Ohmic dissipation dominates the field decay.\footnote{Here we attribute $\tau_{\rm D}(\chi)\simeq150$ Myr to the effect of crustal Ohmic dissipation, but keep in mind that flux tube drift in the core region also plays an important role.} The upper boundary of the blank region in Fig. \ref{Fig1} corresponds to $\tau_{\rm D}(\chi)=1.5\times10^8$ yr; the region above it is excluded by the field decay theory. From Eqs. (\ref{dBdt1}) and (\ref{dBdt2}), we have $\tau_{\rm D}=-B_{\rm d}/\dot{B}_{\rm d}$ and $\tau_{\rm D}=-B_{\rm d}/\dot{B}_{\rm d}-t$, respectively. The actual age $t$ of PSR J1640-4631 currently remains unconstrained by observations, though an estimate of $t\sim3000$ yr (close to its characteristic age $\tau_{\rm c}=3350$ yr \cite{Gotthelf:2014}) was proposed on the basis of the dipole field decay \cite{Gao:2017}. Assuming $t\simeq\tau_{\rm c}$, from Fig. \ref{Fig1} we can see that $t$ is far below the lower boundary of $\tau_{\rm D}(\chi)$. Therefore, hereinafter we can safely neglect the term $t$ and determine the decay time scale via $\tau_{\rm D}=-B_{\rm d}/\dot{B}_{\rm d}$. \section{RESULTS}\label{Sec IV} By substituting the observed $P$, $\dot{P}$, $n=3.15$, and Eq. (\ref{dchi}) into Eq. (\ref{bi2}), and taking $\xi$ as a free parameter, one can solve for $\tau_{\rm D}=-B_{\rm d}/\dot{B}_{\rm d}$ versus $\chi$. The evolution curves $\tau_{\rm D}(\chi)$ for different $\xi$ are shown by the colored curves in Fig.~\ref{Fig1}.
Since the evolution of $\chi$ depends on the shape of the NS, in Fig. \ref{Fig1} we first show the results for the PD case, with $\epsilon_{\rm B}$ given by Eq. (\ref{epsil1}). \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig1.eps}} \caption{Dipole field decay time scale $\tau_{\rm D}$ versus tilt angle $\chi$. This figure shows a comparison between $\tau_{\rm D}(\chi)$ derived using the timing data (colored lines) and that obtained from the magnetic field decay theory (black lines). The NS is assumed to have PD internal fields. See the text for details.} \label{Fig1} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig2.eps}} \caption{The same as Fig. \ref{Fig1}, but the NS is assumed to have TD internal fields. See the text for details.} \label{Fig2} \end{figure} The constraint on $\xi$ is set by the requirement that at a certain $\chi$, $\tau_{\rm D}(\chi)$ derived from the timing data of PSR J1640-4631 should be equal to $\tau_{\rm D}(\chi)$ obtained from the field decay theory. That is, a colored curve must intersect at least one of the black curves, as presented in Fig. \ref{Fig1}. If the internal fields of this pulsar are PD, for the number of precession cycles in the wide range $10^4\lesssim \xi\lesssim 10^8$, each of the colored curves has at least one intersection with the black lines. The intersections are distributed within $2^\circ\lesssim\chi\lesssim18^\circ$ and $57^\circ\lesssim\chi\lesssim90^\circ$. Specifically, for $\xi\lesssim 10^5$, all the intersections lie within $\chi\lesssim 5^\circ$. For $5\times10^6\lesssim \xi\lesssim 10^8$, $\tau_{\rm D}(\chi)$ derived via Eq. (\ref{bi2}) splits into two branches, of which the left one has intersections at $12^\circ\lesssim\chi\lesssim18^\circ$ and the right one has intersection(s) at $57^\circ\lesssim\chi\lesssim82^\circ$. Even if $\xi\gtrsim10^9$ (which might be unphysical) is taken, no intersections can be found for intermediate angles $18^\circ\lesssim\chi\lesssim57^\circ$.

We also investigate the other possibility, that this NS has TD internal fields with $\epsilon_{\rm B}$ given by Eq. (\ref{epsil2}). The results are presented in Fig. \ref{Fig2}, which shows that in order to have at least one intersection between the curve $\tau_{\rm D}(\chi)$ obtained from the timing data and the black dash-dot-dotted line, the lower limit on the number of precession cycles can be set as $\xi\gtrsim 1.25\times10^6$ (the orange curve). All the intersections are distributed within $14^\circ\lesssim \chi\lesssim 63^\circ$ for $1.25\times10^6\lesssim\xi\lesssim10^8$. For tilt angles in the ranges $\chi\lesssim 14^\circ$ and $\chi\gtrsim 63^\circ$, there are no intersections even if an (unphysically) large $\xi\gtrsim10^9$ is adopted. As in the PD case, $\tau_{\rm D}(\chi)$ derived from the timing data also shows a bifurcation for $5\times10^6\lesssim \xi\lesssim 10^8$.
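For concreteness, the following minimal sketch (ours; it adopts the PD case with $\epsilon_{\rm B}$ from Eq. (\ref{epsil1}), the canonical parameter values, and $B_{\rm d}(\chi)$ from the MDR term of Eq. (\ref{dwdt})) illustrates how a colored curve is obtained, i.e., how Eq. (\ref{bi2}) is solved for $\tau_{\rm D}$ at a given pair $(\xi,\chi)$:

\begin{verbatim}
# Sketch (ours) of the procedure behind the colored curves: for a given
# number of precession cycles xi and tilt angle chi, infer B_d from Eq. (1)
# (MDR term only), evaluate chi_dot from Eq. (4), and solve Eq. (5) for
# tau_D = -B_d/B_d_dot.  PD case, with eps_B from Eq. (2).
import math

G, c = 6.674e-8, 2.998e10                 # cgs units
I, R, k = 1e45, 1e6, 1.0 / 6.0
P, Pdot, n_obs = 0.206, 9.758e-13, 3.15
omega = 2.0 * math.pi / P
YR = 3.156e7                              # seconds per year

def tau_D(chi, xi):
    s, cc = math.sin(chi), math.cos(chi)
    s2 = s * s
    # B_d^2(chi) from MDR-only spin-down, Eq. (1)
    Bd2 = 2 * math.pi * Pdot / P**2 * I * c**3 / (k * R**6 * omega**3 * (1 + s2))
    eps = 3.4e-7 * math.sqrt(Bd2) / 1e13       # Eq. (2), PD case
    eta = (5 * k * c**2 * Bd2 * R**6 * (1 + s2)
           / (2 * G * eps**2 * I**2 * omega**2 * (1 + 15 * s2) * s2))
    # chi_dot from Eq. (4), eps_B > 0 branch
    chi_dot = (-(2 * G / (5 * c**5)) * I * eps**2 * omega**4 * s * cc * (15 * s2 + 1)
               - eps / (xi * P) * math.tan(chi)
               - k * Bd2 * R**6 * omega**2 / (I * c**3) * s * cc)
    # Eq. (5) solved for Bdot/B; tau_D = -B/Bdot must be positive for decay
    bracket = 1 / (1 + s2) + (1 + 30 * s2) / (eta * s2 * (1 + 15 * s2))
    BdotB = -(n_obs - 3) * Pdot / (2 * P) - chi_dot * s * cc * bracket
    return -1 / BdotB / YR if BdotB < 0 else math.inf  # inf: decay cannot fit n

for chi_deg in (1, 3, 10, 30, 70):
    print(chi_deg, tau_D(math.radians(chi_deg), xi=1e6))
\end{verbatim}

Where $\dot{B}_{\rm d}/B_{\rm d}$ comes out positive, the observed braking index would require field growth rather than decay and no $\tau_{\rm D}$ exists (the sketch returns infinity), which appears consistent with the limited $\chi$ ranges covered by the colored curves described above.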
Therefore, we suggest that future observations of the tilt angle of PSR J1640-4631 would probably help to probe its internal magnetic field configuration and to put constraints on the number of precession cycles. For instance, a small measured angle $\chi\lesssim 14^\circ$ would support a PD internal field configuration, because no intersections are found for $\chi$ in this range in the TD case. Moreover, a small value for the number of precession cycles, $\xi\lesssim 10^5$, as suggested in previous work \cite{Alpar:1988,Cutler:2002,Jones:2001,Gualtieri:2011}, could be confirmed only if a tiny angle $\chi\lesssim 5^\circ$ is observed. Beyond this angle, $\xi$ would be larger than previous estimates no matter whether the internal fields are PD or TD. With some more calculations we find that as long as an angle $\chi\gtrsim12^\circ$ is observed,\footnote{This is the largest lower limit required to satisfy $\xi\gtrsim 10^6$, which is derived for the PD case by taking $\tau_{\rm D}(\chi)=150$ Myr.} one would have $\xi\gtrsim 10^6$, irrespective of the internal field configuration. A large angle $\chi\gtrsim 63^\circ$ may also indicate the PD scenario; however, the required $\xi$ is then in the range $10^6\lesssim\xi\lesssim10^8$, at least $\sim10$--$10^3$ times larger than previous results. In contrast, an intermediate angle $18^\circ\lesssim\chi\lesssim57^\circ$ would favor a TD internal field configuration and a large $\xi$ with a lower limit of $1.25\times10^6$. Only if the measured angle lies in one of the two small ranges $14^\circ\lesssim\chi\lesssim18^\circ$ and $57^\circ\lesssim\chi\lesssim63^\circ$ could we not deduce whether the poloidal or the toroidal field dominates in the NS interior.

\section{CONCLUSION AND DISCUSSIONS}\label{Sec V} Based on the timing data of PSR J1640-4631 and the magnetic field decay theory, we propose a new method of estimating a vital but presently highly uncertain parameter, the number of precession cycles $\xi$. In the modeling, we considered different internal magnetic field configurations, field decay formulas, and field decay time scales. We conclude that if the tilt angle $\chi$ of PSR J1640-4631 can be measured through polarization observations using future X-ray telescopes (e.g., eXTP \cite{Zhang:2016}), we may obtain quite valuable information about $\xi$ and the internal magnetic fields of this pulsar. Most importantly, irrespective of the internal field configuration, as long as the angle is observed to be $\chi\gtrsim5^\circ$, $\xi$ is constrained to be larger than previous results \cite{Alpar:1988,Cutler:2002,Jones:2001,Gualtieri:2011}. As a conservative estimate, a measured angle $\chi\gtrsim12^\circ$ would indicate $\xi\gtrsim 10^6$, which is at least ten times larger than that suggested previously.

Physically, a large $\xi$ indicates that some rather weak damping mechanisms are responsible for the dissipation of the precessional energy. In the crust, if phonon excitations govern the interactions between vortices and the lattice, the mutual friction parameter, whose reciprocal is approximately equal to $\xi$, could be of order ${\mathcal{B}}\approx10^{-8}$ (e.g., \cite{Haskell:2017,Haskell:2018}). Therefore, an inferred large $\xi\approx10^8$ may suggest that most of the precessional energy is dissipated in the crust through vortex-lattice interactions controlled by phonon excitations. On the other hand, in the core some (\textit{unknown}) weak damping mechanisms other than the electron-vortex interaction may be dominant, as it was recently found in Ref. \cite{Haskell:2018} that a core value ${\mathcal{B}}\sim10^{-7}-10^{-6}$ is required to interpret the rises of three large Crab glitches. If $\xi$ is constrained to be large in the future, it would greatly advance our understanding of the complex interactions inside NSs.
Furthermore, a large $\xi$ implies a long time scale for a prolate NS (e.g., a newborn magnetar) to achieve the orthogonal configuration \cite{Stella:2005}, provided that $\chi$ cannot rapidly increase during the very early period \cite{Dall'Osso:2009}. Thus, if newborn magnetars have a large $\xi$, their GWE may be weak and difficult to detect. Finally, though we have only performed a case study of PSR J1640-4631, we stress that our new method of estimating $\xi$ also applies to the other eight pulsars with measured braking indices. The derived constraints on $\xi$ for these pulsars may differ from those for PSR J1640-4631, which is reasonable because the dominant interior interactions and the internal magnetic field configurations possibly vary from pulsar to pulsar. A detailed analysis of the other pulsars will be presented in a subsequent paper. \acknowledgements We thank the anonymous referees, W. C. G. Ho, and D. I. Jones for helpful comments and suggestions. Quan Cheng acknowledges funding support from the China Postdoctoral Science Foundation under grant No. 2018M632907. This work is also supported by the National Natural Science Foundation of China (Grants No. 11773011, No. 11373036, No. 11133002, No. 11673008, and No. 11622326), the National Program on Key Research and Development Project (Grants No. 2016YFA0400802 and No. 2016YFA0400803), and the Key Research Program of Frontier Sciences, CAS (Grant No. QYZDY-SSW-SLH008).
{ "timestamp": "2019-04-16T02:11:42", "yymm": "1904", "arxiv_id": "1904.06570", "language": "en", "url": "https://arxiv.org/abs/1904.06570" }
\section{Introduction} \label{sec:introduction} \PARstart{T}{he} momentum and growth of large-scale sensor networks have been increasing in recent years. The rising popularity of such networks is due to the fact that they can be used in numerous and diverse event monitoring applications, including traffic, air and water quality, e-health, environmental monitoring (wildlife, forest fires, storms, etc.), and many others. Such networks are expected to operate autonomously and for long periods of time. However, in large-scale sensor networks, the high volume of redundant data communicated through the network increases collisions, causes data loss, and, most importantly, costs sensor nodes a large share of their scarce energy resources. Therefore, due to severe energy, computational, and bandwidth constraints, a sound body of literature has centered on optimizing the efficiency of both the sensing and transmitting activities in order to maximize the lifetime of the network.

One of the most commonly used approaches to tackle this problem is sampling rate adaptation \cite{eSampling,Asampling2,TIIMakhoul,AdhocMak}. The sampling rate is the rate at which new samples are taken from the continuous signal provided by the sensor board. This rate can be adapted according to the input acquired from the monitored area. If no significant change is noticed for a certain period of time, the sampling rate can be reduced for the upcoming period; in contrast, if an event is detected, the sampling rate is increased. This kind of sampling rate adaptation is based on event detection \cite{eSampling,ASTR}. Another sampling rate adaptation technique takes into consideration the temporal and spatial correlation among the reported data \cite{Asampling2}: it limits the sampling rate of the sensors that show high correlation with neighboring ones and maximizes the sampling rate of those showing little or no correlation. Both approaches aim to reduce the amount of redundant data transferred through the network. Other data reduction approaches focus solely on reducing the number of transmissions while maintaining a fixed sampling rate \cite{HLMS,LMS,OSSLMS,TR,CP1}. The most popular of them all is the dual prediction scheme. A prediction model capable of forecasting future values is trained and shared between the source and the destination, thus enabling the source sensor node to transmit only the samples that do not match the predicted values. Some approaches also combine both adaptive sampling and transmission reduction into a single mechanism \cite{DPCAS}, aiming to further minimize energy consumption.

In this paper, we propose a spatial-temporal Correlation based Approach for Sampling and Transmission rate Adaptation (STCSTA) in cluster-based sensor networks. The sensor nodes do not need to run any algorithm. The cluster head is responsible for collecting data from its member sensor nodes and for computing a correlation function that measures the degree of correlation among these nodes. Finally, the sensors that show high correlation are asked to reduce their sampling rates, and the ones showing low correlation are asked to increase them. Moreover, in order to ensure the integrity of the data, a reconstruction algorithm is deployed on the Sink station. The latter is used to reconstruct the \enquote{non-sampled} measurements by exploiting the temporal and spatial correlation among the reported data.
We compare our approach to Data Prediction with Cubic Adaptive Sampling (DPCAS) and to exponential Double Smoothing-based Adaptive Sampling (EDSAS) using real sensor data. Both approaches combine adaptive sampling and transmission reduction into a single mechanism, allowing us to compare the efficiency of our proposal with two very effective approaches in terms of reducing radio communication.

The rest of the paper is organized as follows. In Section~\ref{RW}, the work related to energy-efficient data reduction in wireless sensor networks is presented. In Section~\ref{SandEmodel}, the system model is briefly explained and the energy model used to calculate the energy consumption is presented. A detailed explanation of the proposed approach is provided in Section~\ref{STCSTA}, while experimental results are discussed in Section~\ref{ER}. The paper ends with a conclusion section, in which the contribution is summarized and intended future work is outlined.

\section{Related Work} \label{RW} Resource management in sensor networks is a widely discussed topic among researchers, and there have been numerous studies on it. In this section, we present and discuss the different approaches used to tackle this issue. Compression~\cite{Compression1,compression3,JA1,JA2} and aggregation~\cite{Agg1, Agg1Mak, Agg2Mak} are two techniques aiming to reduce the amount of data routed through the network \cite{CP2}. The former focuses on compressing the data before transmission to the upper node in the network hierarchy, while the latter filters and cleans the data by removing redundant information before routing them to the Sink station. Several data compression and aggregation techniques have been proposed in the literature. The authors in~\cite{Compression1} proposed a compression technique for sensor networks organized in a cluster topology. The approach, called Cluster-Based Compressive Sensing Data Collection (CCS), compresses data at the cluster head level by generating Compressive Sensing (CS) measurements based on block diagonal matrices created from the raw data received from neighboring sensors. The compressed CS measurements are then reconstructed at the base station (Sink). In~\cite{compression3}, the authors proposed a compression scheme called Compressive Data Collection (CDC) for wireless sensor networks that exploits the spatial-temporal correlation among sensory data to perform compression. The scheme consists of two layers: opportunistic routing with compression, and nonuniform random projection based estimation for reconstruction. The authors in~\cite{Agg2} proposed a data aggregation technique called Prefix-Frequency Filtering (PFF). This approach consists of two aggregation layers: the first on the sensor level and the second on the cluster head, or aggregator. On both layers, redundant measurements are filtered using the Jaccard similarity, which measures the correlation among collected measurements. In~\cite{Agg1}, a Dynamical Message List Based Data Aggregation (DMLDA) technique is presented; it is based on a special data structure called a dynamical list, which stores the history of received measurements and is used to filter out duplicates.
One of the most energy-consuming activities in a WSN, besides transmission and processing, is sampling; therefore, several studies have been conducted on how to reduce the amount of sampled data through a technique known as \enquote{adaptive sampling}, where a sensor can adapt its sampling rate according to the change in the input environment. The authors in~\cite{eSampling} proposed an event-sensitive adaptive sampling and low-cost monitoring (e-Sampling) scheme, where each sensor has short and recurrent bursts of a high sampling rate in addition to a low sampling rate. Depending on the analysis of the frequency content of the signal, each sensor can autonomously switch between the two sampling speeds. The authors in~\cite{Asampling2} present a decentralized temporal correlation based adaptive sampling approach, enabling each sensor to decide its own sampling rate while controlling the size of the sampling interval by limiting it to an automatically calculated \enquote{MaximumSkipSamplesLimit (MSSL)} value.

The overwhelming majority of studies agree on the fact that radio transmission is the most energy-consuming activity in a WSN~\cite{EnergyModel,En2,En3}. Accordingly, numerous studies focused on developing techniques to limit the number of radio transmissions. Most of these techniques are based on the concept of data prediction. The idea is to build on the Sink a prediction model, trained on previously collected readings, that is capable of forecasting future measurements, enabling the sensor node to transmit a reading only when the prediction does not respect the error tolerance predefined by the user. The authors in~\cite{HLMS} proposed a Hierarchical Least Mean Squares (HLMS) adaptive filter as a prediction model, one of many adaptive filter based approaches \cite{RLS,LMS,OSSLMS}. This filter consists of multiple layers of regular Least Mean Square (LMS) filters, where each layer takes feedback from the previous layer in the hierarchy, aiming to minimize the prediction error of the model. Another technique, called Derivative Based Prediction (DBP), was introduced in \cite{DBP}; it is less complex than the adaptive-filter based methods. The prediction model is simply a straight line that interpolates a fixed window of data of size $m$ using the first and last $l$ values in the window. In \cite{DPCAS}, the authors proposed an approach that combines an adaptive sampling method based on the TCP CUBIC congestion protocol with a transmission reduction method based on an exponential predictive model. The complete data set, including the \enquote{non-sampled} and \enquote{non-transmitted} measurements, is then reproduced on the Sink by interpolating the received measurements. This approach was inspired by both \cite{ASTCP} and \cite{EDSAS}. The latter uses an exponential Double Smoothing-based Adaptive Sampling (EDSAS) technique that adapts the sampling rate of a sensor based on the accuracy of a prediction model: as long as this model produces good predictions, the sampling rate is kept low; it is increased, however, when the predictions surpass a predefined error threshold. The former operates in a similar fashion; more specifically, it adapts the sampling rate of the sensor node using the TCP congestion control mechanism, hence the name Adaptive Sampling TCP (ASTCP).
Both compression and aggregation are effective in terms of reducing the data load in the network; however, their performance is limited and cannot reach the efficiency of techniques such as adaptive sampling and transmission reduction. Therefore, compression and aggregation are considered a complementary layer that can be added to adaptive sampling and transmission reduction to further increase their efficiency. Despite being very effective in reducing the amount of sampled and transmitted data, adaptive sampling and transmission reduction techniques can still consume a substantial amount of energy. This consumption is proportional to the complexity of the algorithms that must be implemented on the sensor level. A CPU running complex algorithms can consume more energy than the sampling activity itself~\cite{EnergyModel}, which renders the adaptive sampling technique ineffective if the implemented algorithm requires a large number of CPU cycles.

In order to schedule the sampling intervals of sensor nodes and reduce transmission energy, some approaches rely on the spatial-temporal correlation between sensor nodes deployed in the monitored area~\cite{ST1, ST2, ST3, ST4, ST5}. The authors in \cite{ST1} proposed Efficient Data Collection Aware of spatial-temporal Correlation (EAST). In this approach, the sink subdivides the event area into spatially correlated cells of the same size; then, in each cell, the node with the highest residual energy is elected as a representative node. Only the latter transmits data to the sink, while also applying a temporal correlation suppression method to its collected data. Finally, at each time instance, the representative node is re-elected according to the same rule. The main drawback of this approach is that the size of the cell representing an area of spatially correlated nodes is static and is not calculated according to the real level of correlation. Moreover, the representative node is chosen according to residual energy rather than its correlation with the other nodes in the cell; therefore, the term \enquote{representative} is not necessarily accurate. In \cite{ST2}, the authors proposed a sleep scheduling algorithm that aims to minimize the total spatial-temporal coverage redundancy among neighboring nodes while maximizing coverage. Each sensor node compares itself with neighboring ones using a weight criterion and locally optimizes its schedule according to its coverage redundancy. This method requires constant message exchange between sensor nodes in order to keep track of the changing weight of each of them, which can produce overhead. The authors in \cite{ST3} proposed a spatial-temporal correlation model that aims to extend the network lifetime by scheduling a sleeping period for sensors showing high similarities with other sensors belonging to the same cluster. The similarity is measured by computing the Euclidean Distance, the Cosine Similarity, and the Pearson Product-Moment Coefficient (PPMC). If one of the three indicates a high similarity, the sensor node is set to sleep for half of the period (1 period = N samples). The first problem with such an approach is that if a sensor X shows similarity with a sensor Y, the opposite is also true (sensor Y will show similarity with sensor X); therefore, according to this approach, both sensors will be set to sleep. As a result, correlated sensors will be missing data simultaneously instead of compensating for one another by keeping one of them awake.
The second problem is that the sleeping duration is static instead of being computed dynamically according to the level of correlation. Motivated by the problems of the aforementioned approaches, we present in this paper a spatial-temporal Correlation based approach for Sampling and Transmission rate Adaptation (STCSTA) in cluster-based sensor networks. Our approach does not require any algorithm to be implemented on the sensor level; the only tasks performed by the sensors are sampling and transmission. All the work is done on the Cluster-Head (CH) level: at the end of each round (a duration predefined by the user), the CH runs an algorithm that finds the spatial correlation among the data reported by the sensors belonging to the same cluster. Then, it transmits to each of them its new sampling rate for the next round, according to its level of correlation with the other sensors in the cluster. The sampling rate scheduling respects a strict protocol that keeps the sampling rate of the sensors showing high correlation with a large number of nodes at an optimal maximum level. Moreover, the protocol prevents highly correlated sensors from missing data simultaneously, allowing one to compensate for another. In addition to sampling rate scheduling, and in order to ensure the integrity of the data, a reconstruction algorithm is deployed on the Sink. This algorithm can identify the time stamps at which data have not been received due to a reduction in the sampling rate of a specific sensor, and reconstruct them using the spatial-temporal relations among the collection of data reported by the sensors.

\section{System and Energy Model} \label{SandEmodel} \subsection{System model} We consider a set of N sensor nodes and C cluster heads deployed over a specific monitoring area at locations LS=\{$ls_1,ls_2,...,ls_N$\} and LC=\{$lc_1,lc_2,...,lc_C$\}, respectively, where a sensor $S_i$ is located at $ls_i$ and a cluster head $C_j$ is located at $lc_j$, and the Sink is placed at a distant location $l_0$. Sensor nodes are grouped into clusters, where each of them belongs to one cluster only. The cluster heads are considered more powerful than sensor nodes in terms of processing capabilities, and they are allocated larger energy resources. Figure~\ref{fig:Network} illustrates an example of the described network architecture for one cluster. \begin{figure}[] \caption{Illustrative example of the network architecture} \centering \includegraphics[width=\linewidth]{Network2.pdf} \label{fig:Network} \end{figure} The network is periodic and operates in rounds, where each round R lasts exactly P seconds and is subdivided into m time slots; in each time slot a sensor samples one measurement. Therefore, the maximum sampling rate ($SR_{max}$) is m samples per round, i.e., one sample every P/m seconds. During the very first round, each sensor node collects data using the maximum sampling rate $SR_{max}$ and transmits the readings to the CH after each acquisition. On the CH level, when the latter receives a measurement from any sensor $S_i$, it stores the value in its memory and routes it directly to the Sink. At the end of the first round, the CH will have stored in its memory the following matrix $M$, where $n$ is equal to the current sampling rate ($SR_{max}$ in this case) and N is the number of sensors in the cluster:
\begin{center} $M = \begin{bmatrix} \centering x_{1}^{1} & x_{1}^{2} & x_{1}^{3} & \dots & x_{1}^{n} \\ x_{2}^{1} & x_{2}^{2} & x_{2}^{3} & \dots & x_{2}^{n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{N}^{1} & x_{N}^{2} & x_{N}^{3} & \dots & x_{N}^{n} \end{bmatrix} $ \end{center} The CH then proceeds to compute the correlation between each pair of sensors (the number of possible pairs is $\frac{N(N-1)}{2}$). Using the obtained correlation results, the CH calculates and then transmits to each sensor node its new SR. A detailed explanation of how the correlation is calculated and how the new SR is determined is provided in Section \ref{STCSTA}. For the next round, each sensor samples data according to the new sampling rate provided by the CH. For instance, if the latter asks a specific sensor to reduce its sampling rate by 40$\%$, and supposing that $SR_{max}$ is equal to 50 measurements/round, the sensor should sample 30 measurements instead. If each period is 10 minutes long (600 s), instead of sampling a measurement every 12 seconds (600/50), the sensor would sample a measurement every 20 seconds (600/30). Moreover, knowing the duration of each period, the maximum sampling rate, and the time stamp at which each measurement was received, both the Sink and the CH are capable of identifying the non-sampled data, which are replaced by \enquote{Nan} values (see matrix M$^\prime$), both in order to reconstruct them later at the Sink station and in order to simplify the computation of the correlation among sensor nodes on the CH, as explained in Section~\ref{corr}. Therefore, the stored matrix that is used to compute the correlation will actually be as shown below, where n is equal to the maximum number of samples per round ($SR_{max}$): \begin{center} $M^\prime = \begin{bmatrix} \centering x_{1}^{1} & x_{1}^{2} & x_{1}^{3} & \dots & x_{1}^{50} & Nan & x_{1}^{n} \\ x_{2}^{1} & x_{2}^{2} & x_{2}^{3} & \dots & Nan & Nan & x_{2}^{n} \\ \vdots & \vdots & \vdots & Nan & \vdots & \vdots & \vdots \\ x_{N}^{1} & x_{N}^{2} & x_{N}^{3} & \dots & x_{N}^{50} & Nan & x_{N}^{n} \end{bmatrix} $ \end{center} \subsection{Energy model} In order to compute the energy consumption of a sensor node \cite{CP3, CP4}, it is necessary to take into consideration the energy consumed by every single operation performed by the node. Generally, the consumed energy relates to four main tasks, namely sampling, logging, processing, and radio transmission. Therefore, the energy consumption model can be defined as: \begin{equation} \label{eq:energyEq} E_{node}=E_{sampling} + E_{logging} + E_{processing} + E_{radio} \end{equation} where $E_{sampling}$ is the energy required for sampling one value, $E_{logging}$ is the energy required to log data in the memory, $E_{processing}$ is the energy required to run an algorithm consisting of $N$ CPU cycles, and $E_{radio}$ is the energy required to transmit a $b$-bit packet over a distance $d$. In this article, we use the energy model discussed in \cite{EnergyModel} to calculate the overall energy consumption of each sensor node.
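To make this bookkeeping concrete, a minimal sketch of Eq. (\ref{eq:energyEq}) is given below. The sketch is ours, and all per-operation costs are hypothetical placeholders; the actual values come from the model of \cite{EnergyModel}.

\begin{verbatim}
# Minimal sketch of the per-node energy budget E_node (ours; all constants
# below are hypothetical placeholders, not values from the energy model).
E_SAMPLE = 4.5e-6   # J per acquired sample (hypothetical)
E_LOG    = 1.2e-6   # J per value written to memory (hypothetical)
E_CYCLE  = 1.0e-9   # J per CPU cycle (hypothetical)
E_TX_BIT = 2.0e-7   # J per transmitted bit at a fixed distance (hypothetical)

def node_energy(n_samples, n_logged, cpu_cycles, tx_bits):
    """E_node = E_sampling + E_logging + E_processing + E_radio."""
    return (n_samples * E_SAMPLE + n_logged * E_LOG
            + cpu_cycles * E_CYCLE + tx_bits * E_TX_BIT)

# One round at SR_max = 50 samples, all logged and transmitted as 32-bit
# readings, with no on-node processing (as in STCSTA, cpu_cycles = 0):
print(node_energy(50, 50, 0, 50 * 32))
\end{verbatim}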
\section{The Proposed Approach (STCSTA)} \label{STCSTA} In this section, we explain in detail how the correlation between sensor nodes and the new sampling rate of each sensor are calculated. \subsection{Computing correlation and sampling rate allocation} \label{corr} \textbf{Algorithm 1 - lines (2-14):} After a round is completed, each sensor node will have transmitted a different number of measurements to the cluster head, since the sampling rate of each of them can be different. Nevertheless, as mentioned earlier, the CH identifies the non-sampled data and fills the corresponding places in the vector with Nan values, so that all the vectors have the same size $n$. However, the correlation between two vectors containing Nan values cannot be computed. Therefore, every Nan value is replaced by the first \enquote{non-Nan} value that precedes it in the same vector. For instance, in the matrix \enquote{M$^\prime$}, since $x_{1}^{51}$ is Nan it is set equal to $x_{1}^{50}$, while $x_{2}^{50}$ and $x_{2}^{51}$ are set equal to $x_{2}^{49}$, and so on.

\textbf{Algorithm 1 - lines (17-22):} Afterward, the linear dependency of each pair of vectors ($v_i$,$v_j$) $\in$ M$^\prime$ is calculated using the Pearson correlation coefficient. The latter is a standard way of measuring the association between variables of interest because it is based on the method of covariance; it gives information about the magnitude of the association, or correlation, as well as the direction of the relationship. The Pearson correlation coefficient is described in Equation~\ref{eq:pc} below, where $\mu$ and $\sigma$ denote means and standard deviations. \begin{equation} \label{eq:pc} \rho(v_i,v_j)= \frac{1}{n-1}\sum_{k=1}^{n}\left(\frac{v_{ik}-\mu_{v_i}}{\sigma_{v_i}}\right)\left(\frac{v_{jk}-\mu_{v_j}}{\sigma_{v_j}}\right) \end{equation} \begin{figure}[] \centering \includegraphics[width=\linewidth]{All.eps} \caption{Number of moderately $\&$ highly correlated sensors (Pearson correlation coefficient $\geq$ 0.5) during each of the first 100 periods} \label{fig:corr} \end{figure} The justification for using the Pearson correlation is illustrated in Figure~\ref{fig:corr}. We used a data set of 92 sensors to generate 4 graphs that show the number of sensors that are moderately $\&$ highly correlated with 4 randomly chosen sensors during each period, for the first 100 periods. For instance, in Figure~\ref{fig:corr}(a) we notice that this randomly chosen ambient temperature sensor correlates with a large number of sensors during each period; on average it correlates with 27 sensors, as the mean value shows. Similarly, in Figures~\ref{fig:corr}(b) and (c), these sensors correlate on average with approximately 30 other sensors in the same cluster. However, the mean value in Figure~\ref{fig:corr}(d) is significantly lower (mean = 19); in Section~\ref{QR} we will see how this is reflected in the results. Heterogeneous environmental data, besides other types of data such as medical data (vital signs) and movement-tracking data (speed, acceleration, location), are usually highly and/or moderately correlated. This correlation can thus be used to reduce the number of transmitted measurements by deriving values from other observed ones, and it is indeed the motivation behind using correlation to adapt the sampling rate of the sensors.

\textbf{Algorithm 1 - lines (23-28):} After computing the correlation value of each sensor $i$ with all the other sensors belonging to the same cluster, the CH looks for the sensor $j$ that it correlates the most with, as shown in Table~\ref{table:corr1}.
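Before moving on to the rate-allocation steps, the computation described so far (forward filling of the Nan slots, pairwise Pearson correlations, and best-match extraction) can be sketched as follows. This is a minimal illustration of ours, not the simulator code; it assumes that the first slot of each round is always received, as guaranteed by lines 3-7 of Algorithm 1.

\begin{verbatim}
# Sketch (ours) of the CH-side computation: forward-fill the Nan slots,
# compute pairwise Pearson correlations, and extract each sensor's best
# match (as in Table 1).  Names are illustrative.
import math

NAN = None  # placeholder for a non-sampled slot

def forward_fill(row):
    """Replace each Nan with the last preceding non-Nan value."""
    filled, last = [], None
    for v in row:
        last = v if v is not NAN else last
        filled.append(last)
    return filled

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)

def best_matches(M):
    """For each sensor i, return the sensor j it correlates the most with."""
    data = [forward_fill(row) for row in M]
    N = len(data)
    corr = [[pearson(data[i], data[j]) if i != j else -2.0 for j in range(N)]
            for i in range(N)]
    return [max(range(N), key=lambda j: corr[i][j]) for i in range(N)]
\end{verbatim}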
\textbf{Algorithm 1 - lines (29-38):} Afterward, the CH counts the number of occurrences of each sensor $j$ in the second column of the table and stores them in a list in ascending order. \begin{table}[!htb] \centering \begin{tabular}{|c|c|c|} \hline Sensor i & \begin{tabular}[c]{@{}c@{}}Sensor j \\ (has the max \\ correlation \\ with Sensor i)\end{tabular} & Correlation degree \\ \hline 1 & 54 & 0.91 \\ \hline 2 & 7 & 0.87 \\ \hline 3 & 5 & 0.70 \\ \hline 4 & 2 & 0.96 \\ \hline 5 & 6 & 0.75 \\ \hline ... & ... & .. \\ \hline n & 32 & 0.88 \\ \hline \end{tabular} \caption{The correlation table} \label{table:corr1} \end{table} \textbf{Algorithm 1 - lines (39-49):} Starting from the first sensor $j$ in the ordered list, the CH looks for sensor $j$ in the first column of Table~\ref{table:corr1} and extracts the value of its maximum correlation from the third column. The CH then notifies $j$ that its sampling rate must be reduced proportionally to the correlation value. For instance, if sensor $5$ is first in the ordered list, the CH notifies it that its sampling rate for the next round must be reduced by $75\%$, since its level of correlation with sensor $6$ is 0.75. Sensor $j$ (in this case sensor $5$) is then flagged as already notified. Thus, for the next sensor $j$ in the ordered list, if its matching sensor is already flagged, instead of reducing its sampling rate proportionally to the level of correlation, it is reduced by (100 $-$ the match's reduction)$\%$. For instance, if the next sensor $j$ in the list is $3$, it matches with sensor $5$ in Table \ref{table:corr1}; therefore, its sampling rate will be reduced by $100-75=25\%$. And so on, until the last element in the ordered list.

\textbf{Algorithm 1 - lines (50-56):} However, some sensors may not appear in the second column of Table~\ref{table:corr1}, since they have not been matched with other sensors. Therefore, the CH looks for these sensors in the first column of Table~\ref{table:corr1}; for each such sensor $i$, it finds its matching sensor $j$ in the second column, checks by how much the sampling rate of sensor $j$ was reduced, and notifies sensor $i$ that its sampling rate must be reduced by (100 $-$ sensor $j$'s reduction)$\%$. The same operation is repeated at the end of each round, enabling each sensor node to adjust its sampling rate according to its level of correlation with the other sensors in the network. Algorithm 1 summarizes the proposed method implemented on the CH. \bigskip \hrule\vspace{5pt} \noindent{\bfseries Algorithm 1}\quad STCSTA.\\ \vspace{-5pt} \hrule\vspace{5pt} \label{algo1} \hspace*{\algorithmicindent} \textbf{Input:} $SRmax$ (1 sample/ X seconds) \\ \begin{algorithmic}[1] \WHILE{$Energy \neq 0$} \STATE $k \leftarrow 1$ \FOR {each sensor j in the cluster} \STATE receive the first value $v_{j}^0$ at the beginning of the round \STATE $data[j][0] \leftarrow v_{j}^0$ \STATE $lastReceived[j] \leftarrow v_{j}^0 $ \ENDFOR
\WHILE{!end of round} \IF{nothing is received from sensor j after X seconds} \STATE $data[j][k] \leftarrow lastReceived[j]$ \ELSIF {$v_{j}^n$ is received during the X-second count} \STATE $data[j][k] \leftarrow v_{j}^n$ \STATE $lastReceived[j] \leftarrow v_{j}^n$ \ENDIF \STATE $k \leftarrow k+1$ \ENDWHILE \IF {end of round} \FOR {i=1 to N} \FOR {j=i+1 to N} \STATE $corrArray[i][j] \leftarrow PearsonCorr(data[i][:],data[j][:])$ \ENDFOR \ENDFOR \FOR {i=1 to N} \STATE $maxCorr[i][0] \leftarrow i $ \STATE $[index,value] \leftarrow max(corrArray[i][:])$ \STATE $maxCorr[i][1] \leftarrow index$ \STATE $maxCorr[i][2] \leftarrow value$ \ENDFOR \STATE $k \leftarrow 1$ \FOR {each element i $\in$ the second column of maxCorr} \IF {i $\notin$ first column of countOcc} \STATE $count \leftarrow$ number of times i occurs in the second column of maxCorr \STATE $countOcc[k][0] \leftarrow i$ \STATE $countOcc[k][1] \leftarrow count$ \STATE $k \leftarrow k+1$ \ENDIF \ENDFOR \STATE order countOcc in ascending order according to the second column \STATE $k \leftarrow 1$ \FOR {each element j $\in$ the first column of countOcc} \STATE $match \leftarrow maxCorr[j][1]$ \IF {reduce[match-1] is empty} \STATE Notify sensor j that its sampling rate must be reduced by (maxCorr[j][2]*100)$\%$ \STATE $reduce[j-1] \leftarrow maxCorr[j][2]*100$ \ELSE \STATE Notify sensor j that its sampling rate must be reduced by (100 - reduce[match-1])$\%$ \STATE $reduce[j-1] \leftarrow 100 - reduce[match-1]$ \ENDIF \ENDFOR \FOR {j=1 to N} \IF {reduce[j-1] is empty} \STATE $match \leftarrow maxCorr[j][1]$ \STATE Notify sensor j that its sampling rate must be reduced by (100 - $reduce$[match-1])$\%$ \ENDIF \ENDFOR \ENDIF \ENDWHILE \end{algorithmic} \vspace{5pt}\hrule\vspace{10pt}

\subsection{Analysis Study} \label{AS} The objective of this algorithm is to create and manage a sampling rate balancing system based on the correlation degree between the nodes belonging to the same cluster. The idea is to match each sensor node with the one it correlates the most with, in such a way that, if one node of the paired couple heavily reduces its sampling rate, the other one keeps it high, and vice versa, allowing them to compensate for one another. This compensation mechanism is crucial for the success of the reconstruction algorithm in terms of minimizing the estimation error and increasing the quality of the replicated data, since the reconstruction relies on the correlation among sensor nodes to rebuild the non-sampled measurements. Therefore, if highly correlated sensors are missing data simultaneously, the accuracy of the reconstructed measurements is negatively affected. When the balancing of non-sampled data is kept in check on the CH level, the reconstruction algorithm on the Sink will theoretically produce better estimates.

In this section, we illustrate an example that explains our algorithm step by step, providing a closer look at what happens on the cluster head at the end of each round and at why and how this compensation system works. Let us start by assuming that, at the end of a given period, the CH has already computed the correlation between each pair of sensors belonging to the same cluster. In addition, we assume that the CH has already matched each sensor with the one it correlates the most with and stored the results in a table similar to Table \ref{table:corr2}. The next step is to count, for each sensor appearing in the second row of the table, how many times it has been matched.
For instance, sensor 7 has been matched 4 times, sensor 1 has been matched 2 times, and sensors 10, 9, 3, and 8 have been matched only once. The matched sensors are then ordered in ascending order according to how many times they have been matched; the order is then: $\{$sensor 8, sensor 3, sensor 9, sensor 10, sensor 1, sensor 7$\}$. \def\arraystretch{1.5}% \begin{table}[!htb] \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Sensor i & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \begin{tabular}[c]{@{}c@{}}Sensor j (has max \\ correlation with i)\end{tabular} & 8 & 1 & 7 & 3 & 9 & 1 & 10 & 7 & 7 & 7 \\ \hline Correlation degree *$10^{-2}$ & 78 & 69 & 54 & 92 & 85 & 72 & 79 & 83 & 89 & 90 \\ \hline \end{tabular} } \caption{Table showing for each sensor its best match (maximum correlation) and the degree of correlation with this match} \label{table:corr2} \end{table} Starting from the first sensor in the list (sensor 8), the CH looks for the sensor that it matches with. Looking at Table \ref{table:corr2}, we see that sensor 8 matches with sensor 7. The CH then checks whether the sampling rate of sensor 7 for the next round has already been decided. If not, the CH notes that sensor 8 must reduce its sampling rate for the next round by 83\%, since the correlation degree of sensor 8 with its match is 0.83. The CH then follows the same procedure for the next sensors in the ordered list. Sensors 3, 9, and 10 all match with sensor 7 too, and since the sampling rate of sensor 7 has not been decided yet, their sampling rates will be reduced by 54\%, 89\%, and 90\%, respectively, for the next round. Now the CH searches for the sensor that matches with the next sensor in the ordered list (sensor 1). Looking at Table \ref{table:corr2}, we see that it is sensor 8. However, the sampling rate of sensor 8 has already been set to be reduced by 83\%; therefore, instead of reducing the sampling rate of sensor 1 by 78\%, it is reduced by $100-83=17\%$ only. The same holds for sensor 7: it matches with sensor 10, so its sampling rate is reduced by $100-90=10\%$ only. The next step is to adapt the sampling rates of the sensors that do not appear in the second row of the table, in other words, those that have not been matched with other sensors in the cluster. In this example, the non-matched sensors are sensors 2, 4, 5, and 6. Starting with sensor 2, its match is sensor 1; therefore, the sampling rate of sensor 2 for the next round must be reduced by $100-17=83\%$. Similarly, the sampling rates of sensors 4, 5, and 6 will be reduced by 46\%, 11\%, and 83\%, respectively.

Before computing the percentages of the reduction in sampling rate, the matched sensors are first ordered in ascending order according to how many times they have been matched. The reason behind this crucial step can be explained as follows. Suppose the list had not been ordered and the CH had started with sensor 7, which has been matched 4 times with 4 different sensors. The sampling rate of sensor 7 would be reduced by 79\%, and, eventually, the sampling rates of sensors 3, 8, 9, and 10 would each be reduced by only 21\%, compared with 54\%, 83\%, 89\%, and 90\%, respectively, if the list were ordered. As a consequence of not ordering the list first, the overall reduction in the sampling rates of the sensors would be smaller, leading to an increase in data transmission and energy consumption.
Since sensor 7 can compensate for 4 other sensors, it is wise to leave it until the end, allowing the sensors it matches with to reduce their sampling rates more. A summary of the results is given in Table \ref{table:corr3}. We notice that if the sampling rate of a particular sensor is heavily reduced, that of the sensor it correlates with most strongly is reduced only slightly (e.g., sensors 2 and 1). This balanced reduction is meant to compensate for the matched sensor, since the non-sampled values will eventually be derived mostly from its best match. Similarly, if the sampling rate of a sensor is only slightly reduced, this gives more freedom to its match, allowing it to heavily reduce its own sampling rate (e.g., sensors 5 and 9). \def\arraystretch{1.5}% \begin{table}[!htb] \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Sensor i & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline SR reduction (\%) & 17 & 83 & 54 & 46 & 11 & 83 & 10 & 83 & 89 & 90 \\ \hline \begin{tabular}[c]{@{}c@{}}Sensor j (has max\\ correlation with i)\end{tabular} & 8 & 1 & 7 & 3 & 9 & 1 & 10 & 7 & 7 & 7 \\ \hline SR reduction (\%) & 83 & 17 & 10 & 54 & 89 & 17 & 90 & 10 & 10 & 10 \\ \hline \end{tabular} } \caption{Percentage of SR reduction for each sensor compared with its match} \label{table:corr3} \end{table} \subsection{Reconstruction of the non-sampled data} In this section, the algorithm used to reconstruct non-sampled data is explained. As mentioned earlier, the Sink detects and replaces non-sampled data with a \enquote{NaN} value. After a certain period of time, say $M$ rounds (a duration defined by the user), the Sink runs a reconstruction algorithm that replaces all the \enquote{NaN} values with estimates calculated using the spatial and temporal correlation among the data reported by the sensor nodes in the network. This algorithm is deployed on the Sink rather than on the CH because of its complexity: if deployed on the CH, it would consume a great amount of energy. The reconstruction algorithm proposed in \cite{DynaMMo}, originally designed to estimate missing data in co-evolving time series, was adopted and adapted to suit our case. After $M$ rounds, the Sink will have stored the following data set: \begin{center} $SinkDataSet = \begin{bmatrix} x_{1}^{1} & x_{1}^{2} & x_{1}^{3} & \dots & x_{1}^{50} & NaN & \dots & x_{1}^{n*M} \\ x_{2}^{1} & x_{2}^{2} & x_{2}^{3} & \dots & NaN & NaN & \dots & x_{2}^{n*M} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ x_{N}^{1} & x_{N}^{2} & x_{N}^{3} & \dots & x_{N}^{50} & NaN & \dots & x_{N}^{n*M} \end{bmatrix} $ \end{center} \begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{PM.pdf} \caption{The probabilistic model} \label{fig:PM} \end{figure} A probabilistic model (Figure~\ref{fig:PM}) is built to estimate the expectation of the missing values conditioned on the observed part. The model is built by initializing a latent variable $Z_1$, a linear mapping matrix $F$, and a projection matrix $G$; readers interested in how these values are initialized may refer to \cite{DynaMMo}. Afterward, using the linear mapping $F$, the algorithm computes the remaining $Z_n$ ($n \in [2, n*M]$) by simply evaluating $Z_n = F\,Z_{n-1}$. Once all the values of $Z_n$ are calculated, the algorithm estimates the observed and non-observed (NaN) values.
This is achieved by multiplying each $Z_n$ by the projection matrix $G$, which gives the predictions ($[x^{n}_1,\dots,x^{n}_N]$) of the values at sampling time $n$. Using these estimates and the observed part, the algorithm then maximizes the log-likelihood of the observed sequences using an iterative EM algorithm \cite{EM} in order to update $F$ and $G$ and produce more accurate predictions. The same operation is repeated with the newly computed $F$ and $G$ until the number of iterations reaches a maximum value predefined by the user, or until the log-likelihood is no longer increasing.
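To make the data flow concrete, the following is a highly simplified Python sketch of the forward (imputation) pass only. It assumes $F$, $G$, and the initial latent state have already been estimated; the full method of \cite{DynaMMo} additionally re-fits $F$ and $G$ with the EM loop described above, which we omit here for brevity.

\begin{verbatim}
import numpy as np

def forward_impute(X, F, G, z1):
    """X : (N, T) readings with np.nan marking non-sampled entries.
    F : (d, d) latent transition matrix, G : (N, d) projection matrix,
    z1: (d,) initial latent state (all assumed already estimated)."""
    T = X.shape[1]
    Z = np.empty((len(z1), T))
    Z[:, 0] = z1
    for n in range(1, T):                      # Z_n = F Z_{n-1}
        Z[:, n] = F @ Z[:, n - 1]
    X_hat = G @ Z                              # predictions for all entries
    filled = np.where(np.isnan(X), X_hat, X)   # keep observed values as-is
    return filled, X_hat
\end{verbatim}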
\section{Experimental Results} \label{ER} We implemented our algorithm, together with DPCAS~\cite{DPCAS}, in a custom WSN simulator built in Matlab, and we conducted multiple experiments in order to evaluate and compare their performance. In the simulation, we used real sensor readings collected from a sensor network that was deployed at the Grand-St-Bernard pass between Switzerland and Italy~\cite{data}. The network consisted of 23 sensors, each of which collects 9 different environmental features with a fixed sampling rate of 1 sample every 2 minutes. We chose 4 of these 9 features (ambient temperature $[^{\circ}C]$, surface temperature $[^{\circ}C]$, relative humidity $[\%]$, and wind speed [m/s]), since the records of the others are incomplete. Environmental features are usually stationary; therefore, in addition to the original rate of one sample every 2 minutes, and for a more rigorous comparison, we set up two other scenarios: in the first, a sample is taken every 10 minutes, and in the second, every 20 minutes. In this way, the data become \enquote{non-stationary}, which makes them more realistic and makes it harder for both algorithms to adapt to high variation in the collected measurements. The raw data set (one sample every 2 minutes) consists of 10000 readings for each sensor; for the first scenario we end up with 2000 readings, and for the second with 1000 readings. In DPCAS, the parameter $\epsilon$ defines the error tolerance of the application: the greater $\epsilon$ is, the less data will be sampled and transmitted; however, the error of the estimated data increases. The value of $\epsilon$ therefore sets the trade-off between the quality of the replicated data and the amount of sampled and transmitted measurements. In our experiments, we set up five different values of $\epsilon$ ranging between $0.1$ and $0.5$, and we compare our approach to DPCAS for each value of $\epsilon$. \subsection{Sampling and Transmission Reduction} \label{STreduction} In this section, we explore and compare the effectiveness of each algorithm in reducing the amount of both sampled and transmitted data in three different scenarios. As mentioned earlier, each sensor node collects 4 different environmental features (ambient temperature, surface temperature, relative humidity, and wind speed). For simplicity and better visualization of the results, all figures illustrate the percentage of the aggregated sum of the data sampled and transmitted by the 23 nodes combined, over all features. \begin{figure}[!htb] \caption{Average percentage of data sampled by each sensor node} \label{Fig:Sampled} \centering \includegraphics[width=\linewidth]{All_Sensed} \end{figure} \begin{figure}[!htb] \caption{Average percentage of data transmitted by each sensor node} \label{Fig:Transmitted} \centering \includegraphics[width=\linewidth]{All_Transmitted} \end{figure} Figures~\ref{Fig:Sampled} and~\ref{Fig:Transmitted} show that, on the one hand, the larger the sampling interval between two consecutive measurements (i.e., the higher the variation in the data), the greater the average percentage of both sampled and transmitted data when DPCAS is deployed. On the other hand, when our approach (STCSTA) is deployed, the average percentage remains stable regardless of the level of variation in the collected measurements, which makes it more robust, dynamic, and tolerant of high variations. This is not the case for DPCAS, whose effectiveness can be significantly affected (a double-digit increase in sampled and transmitted data) depending on the type of data being collected. Moreover, Figures~\ref{Fig:Sampled} and~\ref{Fig:Transmitted} show that STCSTA has the upper hand when it comes to reducing the amount of both sampled and transmitted data. For sampled data, Figure~\ref{Fig:Sampled} shows that STCSTA outperforms DPCAS in all scenarios and for all values of $\epsilon$. Figure~\ref{Fig:Transmitted} shows the average percentage of data transmitted by each of the 23 nodes for both algorithms in the 3 scenarios and for the different values of $\epsilon$ used in DPCAS. The obtained results show the following: STCSTA outperforms DPCAS when $\epsilon \leq 0.2$ in all scenarios. For $\epsilon = 0.3$, however, DPCAS transmits less data in the first scenario ($SR_{max}$ = 1 sample/2 mins) but more data in the other two scenarios ($SR_{max}$ = 1 sample/10 mins and 1 sample/20 mins). Finally, for $\epsilon = 0.4$ and $0.5$, DPCAS is slightly better in the first two scenarios. In summary, the results in Figure \ref{Fig:Transmitted} show that STCSTA outperformed DPCAS 9 times, DPCAS outperformed STCSTA 5 times, and there was 1 tie. To conclude, when it comes to reducing the sampling and transmission rates, and thus the energy consumed by the sampling activity ($E_{sampling}$) and the transmission activity ($E_{radio}$), STCSTA is more effective than DPCAS. \subsection{Energy Consumption} In this section, we present a comparison of the average energy consumed by the 23 sensor nodes when DPCAS and STCSTA are deployed. \begin{figure}[!htb] \caption{Average energy consumption of each sensor node} \label{EnergyConsumption} \centering \includegraphics[width=\linewidth]{Energy_All} \end{figure} The results obtained in Section \ref{STreduction} clearly show that $E_{radio}$ and $E_{sampling}$ are lower when STCSTA is deployed, since the amount of sampled and transmitted data is directly related to the energy consumed by the sampling and transmitting activities. However, according to Equation~\ref{eq:energyEq}, we still need to account for $E_{logging}$ and $E_{Processing}$, and this is where our approach shows a clear advantage. In DPCAS, an algorithm that handles 4 different sensors at a time must be deployed on the node. The node needs to read from and write to memory, and to perform mathematical operations on the CPU. Therefore, the node consumes additional energy ($E_{logging}$ and $E_{Processing}$).
With STCSTA, in contrast, the node neither runs an algorithm nor performs memory reads and writes; it simply collects a measurement using its integrated sensors and directly transmits it to the CH. Therefore, no additional energy consumption is incurred. Figure~\ref{EnergyConsumption} shows the average energy, in Joules, consumed by each of the 23 deployed nodes. It is clear that our approach consumes approximately $20\%$ to $60\%$ less energy than DPCAS, depending on the scenario and the value of $\epsilon$. \subsection{Comparison with a baseline method} The results described above demonstrate that our approach, STCSTA, outperforms DPCAS in terms of energy savings. The DPCAS algorithm in \cite{DPCAS} was compared to two other approaches that use a similar technique, namely EDSAS \cite{EDSAS} and ASTCP \cite{ASTCP}. As mentioned in Section \ref{RW}, the ASTCP algorithm was inspired by EDSAS, and the DPCAS algorithm was in turn inspired by both ASTCP and EDSAS. In this section, we use EDSAS as the baseline for comparison, since it is the root algorithm that inspired both ASTCP and DPCAS. Table \ref{table:baseline} shows the average energy consumed by each node in all scenarios and for the same value of $\epsilon = 0.1$ used in \cite{DPCAS}. The obtained results are fairly similar to the ones reported in \cite{DPCAS}, and our approach remains superior. \def\arraystretch{1.5}% \begin{table}[!htb] \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Algorithm & \multicolumn{3}{c|}{STCSTA} & \multicolumn{3}{c|}{DPCAS} & \multicolumn{3}{c|}{EDSAS} \\ \hline \begin{tabular}[c]{@{}c@{}}Sampling Rate\\ 1 sample/ x min\end{tabular} & x=2 & x=10 & x=20 & x=2 & x=10 & x=20 & x=2 & x=10 & x=20 \\ \hline Energy (J) & 6.06 & 1.21 & 0.59 & 13.06 & 2.84 & 1.47 & 13.38 & 2.92 & 1.52 \\ \hline \end{tabular} } \caption{Comparison of STCSTA and DPCAS with the baseline EDSAS} \label{table:baseline} \end{table} \subsection{The quality of the replicated data} \label{QR} In order to measure the quality of the final set of data, we use the accuracy of the estimates as the validation criterion. Specifically, we use the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE) as accuracy metrics.
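Both metrics have their standard definitions; for reference, a minimal sketch:

\begin{verbatim}
import numpy as np

def rmse(x, x_hat):
    """Root Mean Square Error between true and estimated values."""
    d = np.asarray(x) - np.asarray(x_hat)
    return float(np.sqrt(np.mean(d ** 2)))

def mae(x, x_hat):
    """Mean Absolute Error between true and estimated values."""
    return float(np.mean(np.abs(np.asarray(x) - np.asarray(x_hat))))
\end{verbatim}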
Table~\ref{table:quality} shows the RMSE and MAE of the estimated data for the three scenarios. For ambient temperature, surface temperature, and relative humidity, the errors are low. This is due to the fact that the spatial-temporal correlation of these features is strong, so the estimation algorithm can obtain an accurate and solid relationship by mining correlation rules. Table~\ref{table:quality} also shows that the error increases when the sampling interval widens: the wider the sampling interval, the weaker the temporal correlation, and hence the harder it is for the estimation algorithm to estimate values accurately. For wind speed, the errors increase significantly, but they are still low relative to the range of recorded values for this feature. Wind speed is not spatially correlated with any other feature; moreover, its value varies significantly from one sample to the next, as shown in Figure \ref{fig:WspeedR}, so its temporal correlation is weak as well, which is why it has the highest error among the features. \def\arraystretch{1.5}% \begin{table}[!htb] \centering \caption {Quality of the reconstructed data} \label{table:quality} \resizebox{\linewidth}{!}{ \begin{tabular}{cc|c|c|c|c|} \cline{3-6} & & \textbf{Ambient Temp} & \textbf{Surface Temp} & \textbf{Relative Humidity} & \textbf{Wind Speed} \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}1 sample/2 mins\end{tabular}}}} & \textbf{RMSE} & 1.12 & 1.33 & 2.7 & 16.5 \\ \cline{2-6} \multicolumn{1}{|c|}{} & \textbf{MAE} & 0.71 & 0.91 & 1.89 & 8.78 \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}1 sample/10 mins\end{tabular}}}} & \textbf{RMSE} & 1.26 & 1.56 & 3.68 & 18.26 \\ \cline{2-6} \multicolumn{1}{|c|}{} & \textbf{MAE} & 0.74 & 1.09 & 2.55 & 9.13 \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}1 sample/20 mins\end{tabular}}}} & \textbf{RMSE} & 1.43 & 1.95 & 4.78 & 23.53 \\ \cline{2-6} \multicolumn{1}{|c|}{} & \textbf{MAE} & 0.87 & 1.43 & 3.21 & 11.93 \\ \hline \end{tabular} } \end{table} \begin{figure}[!htb] \caption{Reconstructed ambient temperature signal} \label{fig:AtempR} \centering \includegraphics[width=\linewidth]{Atemp_Raw_vs_Reconstructed} \end{figure} \begin{figure}[!htb] \caption{Reconstructed surface temperature signal} \label{fig:StempR} \centering \includegraphics[width=\linewidth]{Stemp_Raw_vs_Reconstructed} \end{figure} \begin{figure}[!htb] \caption{Reconstructed relative humidity signal} \label{fig:RhumR} \centering \includegraphics[width=\linewidth]{RHum_Raw_vs_Reconstructed} \end{figure} \begin{figure}[!htb] \caption{Reconstructed wind speed signal} \label{fig:WspeedR} \centering \includegraphics[width=\linewidth]{WSpeed_Raw_vs_Reconstructed} \end{figure} \Cref{fig:AtempR,fig:StempR,fig:RhumR,fig:WspeedR} show the reconstructed signals for ambient temperature, surface temperature, relative humidity, and wind speed, respectively. As shown in the figures, the data estimation (reconstruction) algorithm has been able to capture both the dynamics of the signals and the correlation across the given inputs, thereby achieving a very satisfying reconstruction of the signals. To conclude on the quality of the replicated data, the simulation results presented in this section demonstrate that the Sink is capable of reproducing the \enquote{non-sampled} data with a tolerable error margin. Thus, using our approach, a sensor node can significantly reduce its sampling rate without affecting the integrity of the data. \subsection{The Effect of The Sampling Strategy on Error Minimization} The previous results evaluated the efficiency of our proposed approach (STCSTA) in terms of reducing data transmission and energy consumption, as well as the quality of the data replicated on the Sink. As explained in Section \ref{AS}, however, the objective of our algorithm is to guarantee that highly correlated sensors do not skip data sampling simultaneously, in order to reduce the reconstruction error. So far this was a theoretical argument; in this section, we put the theory into practice in order to justify the claim. Instead of building a list of matching sensors, ordering the list, and reducing the sampling rate of each sensor in proportion to its match, we eliminated the steps from line 30 onward in Algorithm 1, allowing each sensor to reduce its sampling rate according only to its highest degree of correlation.
For instance, assume that sensor 1 has its highest correlation degree with sensor 5 (0.8). Without checking whether sensor 5 has already reduced its sampling rate, sensor 1 automatically reduces its rate by $80\%$. There is a chance that sensor 5 has already reduced its own sampling rate, say by $70\%$; thus, both sensors 5 and 1 will skip sampling simultaneously, which would, in theory, negatively affect the reconstruction algorithm and lead to an increase in the reconstruction error. We call this method the \enquote{exaggerated sampling reduction} method. Table \ref{table:errortable2} shows the percentage increase in the reconstruction error when this method is applied. We notice that the reconstruction error increases significantly in all scenarios and for all environmental features, which justifies our controlled sampling strategy. \def\arraystretch{1.5}% \begin{table}[!htb] \centering \caption {Percentage of increase in reconstruction error (the exaggerated sampling reduction method)} \label{table:errortable2} \resizebox{\linewidth}{!}{ \begin{tabular}{cc|c|c|c|c|} \cline{3-6} & & Ambient Temp & Surface Temp & Relative Humidity & Wind Speed \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{1 sample/2 mins}} & RMSE & 16.9 \% & 34.5 \% & 20 \% & 45.3 \% \\ \cline{2-6} \multicolumn{1}{|c|}{} & MAE & 12.6 \% & 41.7 \% & 16.93 \% & 23.4 \% \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{1 sample/10 mins}} & RMSE & 26.9 \% & 44.8 \% & 35.8 \% & 52.7 \% \\ \cline{2-6} \multicolumn{1}{|c|}{} & MAE & 39.1 \% & 52.3 \% & 29.4 \% & 75.2 \% \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{1 sample/20 mins}} & RMSE & 25.8 \% & 48.2 \% & 50.2 \% & 36.0 \% \\ \cline{2-6} \multicolumn{1}{|c|}{} & MAE & 25.2 \% & 44.0 \% & 45.7 \% & 59.2 \% \\ \hline \end{tabular} } \end{table} \subsection{Scalability and Limitations} The scalability of such a network clearly depends on the computational power of the CH and its memory capacity: a more powerful CPU and a larger memory allow the CH to handle a larger number of sensors simultaneously, whereas a weaker CPU and a smaller memory reduce the number of nodes a CH can handle. Many devices that can be used as a CH are currently available on the market, with widely different features and characteristics, from cheap, less powerful devices for personal use to expensive, powerful devices for commercial use. The choice of the CH therefore depends on the size of the network a user wants to deploy: a network consisting of thousands of nodes will certainly need a powerful CH, whereas a network consisting of a few hundred or a few tens of nodes could work just fine with a less powerful one. \begin{figure}[!htb] \caption{Memory size needed for the first part of the Algorithm (lines 1--28)} \label{Fig:116} \centering \includegraphics[width=\linewidth]{Memo_Alg_1-19} \end{figure} \begin{figure}[!htb] \caption{Memory size needed for the second part of the Algorithm (lines 23--57)} \label{Fig:17up} \centering \includegraphics[width=\linewidth]{Memo_Alg_19Up} \end{figure} Our proposed algorithm is not very complex, though: its time complexity is linear ($O(n)$), which allows the CH to handle a large number of nodes with minimal computational power. Regarding the memory size required by STCSTA, assume that the number of nodes in the cluster is $N$ and that each value is encoded in 8 bytes. \begin{itemize} \item $8 \times (N(SR_{max}+\frac{1}{2}N + 4)+1)$ bytes is the memory size required by Algorithm 1, lines 1--28. Figure~\ref{Fig:116} shows the memory size needed by the CH as a function of $SR_{max}$ and the number of nodes belonging to the cluster. \item $8 \times (6N+1)$ bytes is the memory size required by Algorithm 1, lines 23--57, assuming that the number of matched sensors is at most equal to the number of sensors in the cluster. Figure~\ref{Fig:17up} shows the maximum memory size needed by the CH as a function of the number of nodes belonging to the cluster. \end{itemize} The maximum memory size required by the CH is $8 \times \max\left(N(SR_{max}+\frac{1}{2}N + 4)+1,\; 6N+1\right)$ bytes, since the values stored in the first part of the algorithm (lines 1--17) can be cleared once the sensors have been matched (Algorithm 1, lines 17--28).
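A minimal sketch of this sizing rule (the helper name is ours) makes the two expressions easy to evaluate when dimensioning a CH:

\begin{verbatim}
def stcsta_ch_memory_bytes(n_nodes, sr_max, bytes_per_value=8):
    """Peak CH memory for Algorithm 1, using the two expressions above."""
    part1 = n_nodes * (sr_max + 0.5 * n_nodes + 4) + 1   # lines 1-28
    part2 = 6 * n_nodes + 1                              # lines 23-57
    return bytes_per_value * max(part1, part2)

# e.g., a 50-node cluster with SR_max = 30 samples per round:
print(stcsta_ch_memory_bytes(50, 30))   # 23608.0 bytes (~23 KB)
\end{verbatim}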
Nevertheless, the more nodes belong to the same cluster, the better the correlation among these nodes, and the less data each sensor samples and transmits, which eventually leads to lower energy consumption. Therefore, the number of sensors belonging to the same cluster should be maximized subject to the CH's computational and memory resources. As for the limitations of our proposed algorithm, they become evident when there is little or no correlation among the collected measurements: in that case, the sampling rates of the sensors are always kept high, and since the role of the algorithm is to minimize the sampling rate of the sensor nodes, it will not be as efficient as it should be. \section{Conclusion} In this paper, we proposed a sampling and transmission rate adaptation algorithm for cluster-based sensor networks. The algorithm is deployed on the Cluster Head (CH) and operates in rounds, controlling the sampling rate of each individual sensor node by increasing or decreasing it according to its spatial correlation with other sensors in the network. Moreover, we adopted and adapted a data reconstruction algorithm that is implemented on the Sink station. The Sink identifies the \enquote{non-sampled} data that are not collected due to a decrease in the sampling rate of a specific sensor and estimates them using an iterative EM approach that is capable of capturing the temporal and spatial correlation among the reported measurements. We presented experiments conducted on real sensor data from a network deployed at the Grand-St-Bernard pass between Switzerland and Italy, comparing our approach with a recent data reduction technique that combines adaptive sampling and transmission reduction. The obtained results demonstrate that our proposal is better at reducing the energy consumption of the sensor nodes, thus extending the operational lifetime of the network while preserving the integrity and the quality of the data. For future work, we aim to better tune the algorithm deployed on the CH by incorporating other attributes in determining the optimal sampling rate of each individual sensor. Moreover, we will explore the possibility of adding a compression phase between the CH and the Sink in order to further reduce the amount of transmitted data. \section{Acknowledgement} This work is partially funded by the EIPHI Graduate School (contract "ANR-17-EURE-0002"), the France-Suisse Interreg RESponSE project, the Lebanese University Research Program (Number: 4/6132), and EPSRC PETRAS 2 (EP/S035362/1). \bibliographystyle{IEEEtran}
{ "timestamp": "2019-04-16T02:18:24", "yymm": "1904", "arxiv_id": "1904.06705", "language": "en", "url": "https://arxiv.org/abs/1904.06705" }
\section{Introduction} \label{sec:Intro} As a subclass of core-collapse supernovae (CCSNe), Type Ic SNe (SNe~Ic) have long been believed to be the results of explosions of massive stars that had lost all of their hydrogen and all (or almost all) of their helium envelopes, thereby showing no hydrogen and helium absorption lines (see \citealt{Fil1997,Mat2011,Gal2017} for reviews). The light curves (LCs), spectra, and physical parameters of SNe~Ic are rather heterogeneous. According to their peak luminosities, they can be classified into three subclasses: ordinary SNe~Ic, luminous SNe~Ic, and superluminous SNe~Ic (SLSNe~Ic; \citealt{Qui2011,Gal2012,Gal2018}).\footnote{See, e.g., Figure 13 of \citet{Nich2015} and Figure 3 of \citet{DeCia2018}. \citet{DeCia2018} show that there is a continuous luminosity function from faint SNe~Ic to SLSNe-I. We call SNe that are dimmer than SLSNe but brighter than canonical SNe Ia ``luminous SNe''; they are similar to the luminous optical transients presented by \citet{Arc2016}.} Based on their spectra around peak brightness, SNe~Ic can be divided into normal SNe~Ic and ``broad-lined SNe~Ic'' (SNe~Ic-BL) \citep{Woo2006}. According to their kinetic energy ($E_{\rm K}$), they can also be split into normal SNe~Ic ($E_{\rm K} \lesssim 2\times10^{51}$ erg) and ``hypernovae'' ($E_{\rm K} \gtrsim 2\times10^{51}$ erg; \citealt{Iwa1998}). A minority of SNe~Ic-BL are associated with gamma-ray bursts (GRBs) or X-ray flashes (XRFs) and are called ``GRB-SNe'' (see \citealt{Woo2006,Hjo2012,Cano2017}, and references therein). Studying the energy sources of SNe~Ic-BL and SLSNe~I/Ic is a very important part of time-domain astronomy. The LCs of normal SNe~Ic can be explained by the $^{56}$Ni cascade decay model ($^{56}$Ni model for short; \citealt{Col1969,Col1980,Arn1982}), while the energy sources of luminous SNe and SLSNe are still being debated: they cannot be explained by the $^{56}$Ni model (e.g., \citealt{Qui2011,Gal2012,Inse2013}), so researchers instead often invoke the magnetar model \citep{Mae2007,Kas2010,Woos2010,Cha2012,Cha2013,Des2012b,Inse2013,Chen2015,Wang2015a,Wang2016b,Dai2016}, involving nascent, highly magnetized neutron stars (magnetic field strength $B_{p} \approx 10^{13}$--$10^{15}$ G)\footnote{It has been suggested that a magnetar with $B_{p} \approx 10^{16}$ G can power SNe~Ic-BL \citep{Wang2016a,Wang2017a,Wang2017b,Chen2017}.}, or the circumstellar interaction model \citep{Che1982,Che1994,Chu1994,Gin2012,Cha2012,Cha2013,Liu2018}, in which ejecta kinetic energy is converted to radiation. In this paper, we study the very nearby Type Ic SN~2007D. The luminosity distance $D_L$ derived from the Tully-Fisher relation and the redshift $z$ of the host galaxy of SN~2007D (UGC~2653) are $106_{-8.5}^{+2}$\,Mpc (from NED)\footnote{http://ned.ipac.caltech.edu/cgi-bin/nDistance?name=UGC+02653 .} and $0.023146\pm0.000017$ (recession velocity $6939\pm5$ km s$^{-1}$; \citealt{Weg1993}), respectively. The photospheric velocity ($v_\mathrm{ph}$) of SN~2007D inferred from the Fe\,{\sc ii}$\lambda$5169 absorption line about 8 days before $V$-band maximum brightness is $\sim 13,350 \pm 4000$ km s$^{-1}$ \citep{Mod2014,Mod2016}, smaller than the canonical value for SNe~Ic-BL ($\sim 22,200 \pm 9400$ km s$^{-1}$; \citealt{Mod2016}) and the average value for SLSNe~I 10 days after peak brightness ($\sim 15,000 \pm 2600$ km s$^{-1}$; \citealt{Liu2017b}).
SN~2007D was heavily extinguished by its highly inclined ($\sim 70^{\circ}$; \citealt{Dro2011}) host galaxy UGC~2653 ($E(B-V)_\mathrm{host} = 0.91 \pm 0.13$ mag; \citealt{Dro2011}) and the Milky Way ($E(B-V)_\mathrm{Gal} = 0.335$ mag; \citealt{Sch1998}). By performing the extinction correction, \citet{Dro2011} found that the $R$-band and $V$-band peak absolute magnitudes ($M_{R,{\mathrm{peak}}}$ and $M_{V,{\mathrm{peak}}}$) of SN~2007D are $\sim -20.65 \pm 0.55$ mag and $< -20.54$ mag, respectively, significantly brighter than those of all other SNe~Ibc.\footnote{The average peak absolute magnitude of two dozen nearby ($D_L \lesssim 60$ Mpc) SNe~Ibc discovered by the Lick Observatory Supernova Search (LOSS) is $-16.09 \pm 0.23$ mag (with a 1$\sigma$ dispersion of 1.24 mag; \citealt{Li2011}). The average peak absolute magnitudes of nearby ($D_L \lesssim 150$ Mpc) SNe~Ic and SNe~Ic-BL observed by the Palomar 60-inch telescope (P60) are $-17.4 \pm 0.4$ mag and $-18.3 \pm 0.6$ mag, respectively \citep{Dro2011}. Among these SNe~Ic and SNe~Ic-BL, SN~2007D is the most luminous.} While \citet{Gal2012} suggested that the SLSN threshold can be set at $-21$ mag, \citet{Qui2018} and \citet{DeCia2018} re-examined the threshold of SLSNe and suggested it is $\sim -20.5$ mag, as adopted by \citet{Qui2014}. According to the latter threshold, SN~2007D is a SLSN. However, the extinction values of the host galaxy of SN~2007D and the Milky Way are rather uncertain. For example, using the values of \citet{Sch2011} for the foreground extinction\footnote{http://irsa.ipac.caltech.edu/applications/DUST/} (which are roughly 20--30\% lower than those of \citealt{Sch1998}) and the $K$-corrected $V$-band LC of SN~2007D, we find a peak absolute magnitude $M_\mathrm{V,peak}$ of only $\sim -20.06$ mag\footnote{This arises from a peak apparent magnitude of $m_\mathrm{V,peak} = 15.06 \pm 0.36$, which includes a foreground extinction of 0.79 mag and host extinction of 2.50 mag, and the Tully-Fisher distance modulus on the NED website (http://ned.ipac.caltech.edu/cgi-bin/nDistance?name=UGC+02653) of $35.12 \pm 0.47$ mag.}, $\sim 0.48$ mag dimmer than the value inferred by \citet{Dro2011} ($< -20.54$ mag). In this case, SN~2007D is a luminous SN whose peak luminosity lies between those of ordinary SNe and SLSNe (see, e.g., \citealt{Arc2016}). We call these two different LCs ``Case A'' and ``Case B'' throughout this paper. The energy source of SN~2007D has not yet been definitively determined. By assuming that the luminosity evolution of SN~2007D was powered by $^{56}$Ni decay and supposing that the ejecta velocity is $\sim 2\times 10^{9}$ cm s$^{-1}$, \citet{Dro2011} inferred that the mass of $^{56}$Ni synthesized in the explosion and the value of $(M_{\mathrm{ej}}/{\rm M}_\odot)^{3/4}(E_{\mathrm{K}}/10^{51} \mathrm{erg})^{-1/4}$ are $\sim 1.5 \pm 0.5~$M$_\odot$ and $\sim 1.5_{-0.5}^{+0.8}$, respectively (see Table 6 of \citealt{Dro2011}). Supposing $v_\mathrm{sc} \approx 2\times 10^{9}$ cm s$^{-1}$ for the scale velocity of the ejecta and solving the equation $(M_{\mathrm{ej}}/\mathrm{M}_\odot)^{3/4}(E_{\mathrm{K}}/10^{51} \mathrm{erg})^{-1/4} = 1.5_{-0.5}^{+0.8}$, however, we find that the mass of the ejecta is $M_{\mathrm{ej}} = 3.5_{-1.95}^{+4.7}~$M$_\odot$.
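The arithmetic behind this estimate can be checked with a few lines of Python. This is a minimal sketch, assuming the relation $E_{\mathrm{K}} = 0.3\,M_{\mathrm{ej}}v_{\mathrm{sc}}^{2}$ that we adopt in Section \ref{sec:fit}:

\begin{verbatim}
# Solve (M_ej/Msun)^(3/4) (E_K/1e51 erg)^(-1/4) = 1.5 (+0.8/-0.5)
# with E_K = 0.3 M_ej v_sc^2 and v_sc = 2e9 cm/s.
MSUN_G = 1.989e33
V_SC = 2e9                                    # cm/s
k = 0.3 * MSUN_G * V_SC**2 / 1e51             # (E_K/1e51) per solar mass

for rhs in (1.0, 1.5, 2.3):                   # lower, central, upper
    # m^(3/4) (k m)^(-1/4) = rhs  =>  m = (rhs * k^(1/4))^2
    m_ej = (rhs * k**0.25) ** 2
    print(f"rhs = {rhs}: M_ej ~ {m_ej:.2f} Msun")
# -> ~1.55, ~3.48, ~8.17 Msun, i.e., M_ej = 3.5 (+4.7/-1.95) Msun
\end{verbatim}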
Then the ratio of the $^{56}$Ni mass to the ejecta mass ($M_{\mathrm{Ni}}/M_{\mathrm{ej}}$) is $\sim 0.43_{-0.31}^{+0.86}$, significantly larger than the upper limit ($\sim 0.20$) determined by numerical simulations \citep{Ume2008}, suggesting that the photometric evolution of SN~2007D cannot be explained by the $^{56}$Ni model. Therefore, the question of the energy source of SN~2007D deserves detailed study. In fact, \citet{Gal2012} had discussed SN~2007D and SN~2010ay as ``transitional'' events between SLSNe-I and SNe~Ic and suggested that a ``central engine'' may power their large observed peak luminosities. However, no quantitative research on this idea has been performed to date. In this paper, we investigate in detail the energy-source mechanisms powering the luminosity evolution of SN~2007D. In Section \ref{sec:fit}, we employ the $^{56}$Ni model, the magnetar model, as well as the magnetar+$^{56}$Ni model to fit the $R$-band LC and the $V-R$ color evolution of SN~2007D. Discussion and conclusions are presented in Sections \ref{sec:dis} and \ref{sec:con}, respectively. \section{Modeling the Multiband LCs of SN~2007D} \label{sec:fit} In this section, we employ semianalytic models to fit the $R$-band LC and the $V-R$ color evolution of SN~2007D.\footnote{The $R$, $V$, and $V-R$ LCs are presented by \citet{Dro2011}. By fitting two of these three LCs, the remaining one is also determined. We choose to fit the $R$ and $V-R$ LCs.} To fit these LCs, we neglect the dilution effect (e.g., \citealt{Des2012a}) of the ejecta and assume that the SN radiation is black-body emission: $F(\nu,t)= (2{\pi}h{\nu}^3/c^2)\left(e^{h\nu/k_{\rm B}T(t)}-1\right)^{-1}\left(R^2(t)/D_L^2\right)$, where $R(t)=v_{\mathrm{sc}}t$ is the photospheric radius, $T(t)= \left[L(t)/4\pi\sigma(v_{\mathrm{sc}}t)^2\right]^{1/4}$ is the black-body temperature, and $L(t)$ is the bolometric luminosity of the SN. Using the Vega magnitude system ($\mathrm{mag}(\nu,t) = -2.5\,\mathrm{log}_{10}F(\nu,t)-48.598-zp(f_{\nu})$) and Table A2 of \citet{Bes1998}, we can convert the fluxes to magnitudes.\footnote{In Table A2 of \citet{Bes1998}, note that ``$zp(f_{\lambda}$)'' (in the fourth line) and ``$zp(f_{\nu}$)'' (in the fifth line) must be exchanged.} Hence, our semianalytic models should simultaneously reproduce the bolometric LC, the temperature evolution, and the multiband LCs of SN~2007D. In adopting a simple black-body model, we neglect the blue-ultraviolet (UV) suppression, which would yield a dimmer blue-UV luminosity and a brighter optical luminosity. To obtain the best-fit parameters and their ranges, we adopt the Markov Chain Monte Carlo (MCMC) method.
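The flux-to-magnitude conversion above is straightforward to express in code. The following is a minimal sketch in cgs units; the zero point $zp(f_\nu)$ must be supplied from Table A2 of \citet{Bes1998}:

\begin{verbatim}
import numpy as np

H = 6.626e-27; C = 2.998e10; KB = 1.381e-16   # cgs: erg s, cm/s, erg/K
SIGMA_SB = 5.670e-5                           # erg cm^-2 s^-1 K^-4

def vega_mag(nu, t, L, v_sc, D_L, zp_fnu):
    """Apparent Vega magnitude under the black-body assumption.

    nu in Hz, t in s since explosion, L(t) in erg/s, v_sc in cm/s,
    D_L in cm; zp_fnu is the band zero point of Bessell et al. (1998).
    """
    R = v_sc * t                                   # photospheric radius
    T = (L / (4 * np.pi * SIGMA_SB * R**2))**0.25  # black-body temperature
    F = (2*np.pi*H*nu**3 / C**2) / np.expm1(H*nu / (KB*T)) * (R/D_L)**2
    return -2.5 * np.log10(F) - 48.598 - zp_fnu
\end{verbatim}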
\subsection{The $^{56}$Ni-Only Model} We first employ a semianalytic $^{56}$Ni model to fit the $R$ and $V-R$ LCs. The LCs reproduced by this model are determined by the optical opacity $\kappa$, the ejecta mass $M_{\mathrm{ej}}$, the initial scale velocity of the ejecta $v_{\mathrm{sc0}}$, the $^{56}$Ni mass $M_{\mathrm{Ni}}$, the gamma-ray opacity to $^{56}$Ni decay photons $\kappa_{\gamma,\mathrm{Ni}}$, and the moment of explosion $t_\mathrm{expl}$. We suppose that the initial kinetic energy of the ejecta ($E_{\mathrm{K0}} = 0.3\,M_{\mathrm{ej}}v_{\mathrm{sc0}}^{2}$) is provided by the neutrino-driven mechanism. The upper limit of $E_{\mathrm{K0}}$ is then set to $2.5 \times 10^{51}$ erg, since the upper limit of the energy provided by the neutrino-driven mechanism is (2.0--2.5) $\times 10^{51}$ erg \citep{Jan2016}. The upper limit of $v_{\mathrm{sc0}}$ is adopted to be $\sim 16,000$ km s$^{-1}$. Without this constraint, the MCMC fit would favor a $v_{\mathrm{sc0}}$ value that yields a photospheric velocity significantly larger than the observed one ($\sim 13,350 \pm 4000$ km s$^{-1}$), since only one velocity point is available. The theoretical $^{56}$Ni-powered $R$ and $V-R$ LCs are shown in Figure \ref{fig:2007D-Ni}, and the parameters of the $^{56}$Ni model are listed in Table \ref{tab:para}. To match the post-peak $R$-band LC, the value of $\kappa_{\gamma,\mathrm{Ni}}$ must be $1.12_{-0.86}^{+4.01}$ cm$^{2}$ g$^{-1}$, larger than the canonical value of 0.027 cm$^{2}$ g$^{-1}$ (e.g., \citealt{Cap1997,Maz2000,Mae2003}). For Case A, the inferred $^{56}$Ni mass is $\sim 2.66_{-0.15}^{+0.17}$\,M$_\odot$. This value is significantly larger than that ($\sim 1.5$\,M$_\odot$) derived from the relation linking the $R$-band peak magnitude $M_{R,\mathrm{peak}}$ and the $^{56}$Ni mass yield used by \citet{Dro2011}. The reason is that a higher peak luminosity and temperature result in a bluer photosphere at peak, so the ratio of the UV flux to the $R$-band flux is larger than in normal SNe~Ibc and more $^{56}$Ni is needed to power the SN peak. As shown in Table \ref{tab:para}, the derived ejecta mass is $1.39_{-0.33}^{+0.19}$\,M$_\odot$, smaller than the mass of $^{56}$Ni. For Case B, the inferred values of the ejecta mass and $^{56}$Ni mass are $1.45_{-0.32}^{+0.17}$\,M$_\odot$ and $1.61_{-0.07}^{+0.08}$\,M$_\odot$, respectively; the $^{56}$Ni mass is again larger than the ejecta mass. We note that the value of $\kappa$ can vary from 0.06 to 0.20 cm$^2$ g$^{-1}$ (see the references listed by \citealt{Wang2017c}) and was fixed here to 0.07 cm$^{2}$ g$^{-1}$. A larger (smaller) value would result in a smaller (larger) value of $M_{\mathrm{ej}}$ (see, e.g., \citealt{Wang2015b,Nagy2016,Wang2017c}). Nevertheless, the inferred ratio of the $^{56}$Ni mass to the ejecta mass would still be larger than 1.36 (for Case A) or 0.90 (for Case B) even if $\kappa = 0.06$ cm$^{2}$ g$^{-1}$. These results indicate that the $^{56}$Ni model cannot explain the multiband LCs of SN~2007D and that other energy sources must be involved, because the ratio of the $^{56}$Ni mass to the ejecta mass cannot be larger than $\sim 0.20$ \citep{Ume2008}. \subsection{The Magnetar Model} \label{subsec:fit2} Since the modeling disfavors the $^{56}$Ni-only model, alternative models must be considered. Here we use the magnetar model to fit the $R$-band LC and the color evolution of SN~2007D. The free parameters of the magnetar model are $\kappa$, $M_{\mathrm{ej}}$, $v_{\mathrm{sc0}}$, the magnetic field strength $B_{p}$, the magnetar's initial rotational period $P_{0}$, the gamma-ray opacity to magnetar photons $\kappa_{\gamma,\mathrm{mag}}$, and $t_\mathrm{expl}$. The $R$ and $V-R$ LCs reproduced by the magnetar model are shown in Figure \ref{fig:2007D-mag}, and the corresponding parameters are listed in Table \ref{tab:para}. We find that a magnetar with $P_0 \approx 7.28_{-0.21}^{+0.21}$\,ms (or $9.00_{-0.42}^{+0.32}$\,ms for Case B) and $B_p \approx 3.10_{-0.35}^{+0.36}\times10^{14}$\,G (or $2.81_{-0.44}^{+0.43}\times10^{14}$\,G for Case B) can power the multiband LCs of SN~2007D.
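For reference, the two heating inputs that drive this class of semianalytic models take standard forms: the $^{56}$Ni/$^{56}$Co decay input (e.g., \citealt{Arn1982}) and the magnetar dipole spin-down input (e.g., \citealt{Kas2010}). The sketch below uses commonly adopted constants and is our own illustration, not the exact implementation used for the fits:

\begin{verbatim}
import numpy as np

DAY = 86400.0
EPS_NI, EPS_CO = 3.9e10, 6.78e9        # heating rates (erg/s/g)
TAU_NI, TAU_CO = 8.77 * DAY, 111.3 * DAY

def l_nickel(t, m_ni_g):
    """56Ni -> 56Co -> 56Fe decay input (erg/s); t in s, mass in g."""
    return m_ni_g * ((EPS_NI - EPS_CO) * np.exp(-t / TAU_NI)
                     + EPS_CO * np.exp(-t / TAU_CO))

def l_magnetar(t, e_rot, tau_p):
    """Dipole spin-down input L(t) = (E_rot/tau_p)/(1 + t/tau_p)^2."""
    return (e_rot / tau_p) / (1.0 + t / tau_p) ** 2
\end{verbatim}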
\subsection{The Magnetar Plus $^{56}$Ni Model} \label{subsec:fit3} It has been proposed that $\lesssim 0.2$\,M$_\odot$ of $^{56}$Ni can be synthesized by an energetic SN explosion \citep{Nom2013}. We employ the magnetar plus $^{56}$Ni model, whose free parameters are $\kappa$, $M_{\mathrm{ej}}$, $v_{\mathrm{sc0}}$, $B_{p}$, $P_{0}$, $\kappa_{\gamma,\mathrm{mag}}$, $M_{\mathrm{Ni}}$, $\kappa_{\gamma,\mathrm{Ni}}$, and $t_\mathrm{expl}$. It can be expected that the contribution of such a small amount of $^{56}$Ni is substantially less than that of a magnetar; therefore, the LCs reproduced by the magnetar model and the magnetar plus $\lesssim 0.2$\,M$_\odot$ of $^{56}$Ni model cannot be distinguished if the parameters are tuned. We add $0.2$\,M$_\odot$ of $^{56}$Ni (see also \citealt{Met2015,Bers2016} for SN~2011kl) and fit the LCs. The LCs produced by such a magnetar ($P_0 \approx 7.43_{-0.21}^{+0.22}$\,ms for Case A, $9.02_{-0.57}^{+0.44}$\,ms for Case B; $B_p \approx 3.04_{-0.37}^{+0.37}\times10^{14}$\,G for Case A, $2.49_{-0.46}^{+0.49}\times10^{14}$\,G for Case B) plus $0.2$\,M$_\odot$ of $^{56}$Ni, as well as the LCs powered by $0.2$\,M$_\odot$ of $^{56}$Ni alone, are plotted in Figure \ref{fig:2007D-magni}, and the corresponding parameters are listed in Table \ref{tab:para}. While the photometric evolution of SN~2007D can also be explained by the magnetar plus $^{56}$Ni model, the contribution of $^{56}$Ni can be neglected. \section{Discussion} \label{sec:dis} \subsection{Bolometric LC and the Temperature Evolution of SN~2007D} In Section \ref{sec:fit}, we used several models to fit the $R$ and $V-R$ LCs of SN~2007D. To obtain more information, we plot the theoretical bolometric LCs and the temperature evolution; see Figure \ref{fig:2007D-BoloTV}. The derived temperature of SN~2007D in Case A is rather high, $>10,000$ K when $t-t_\mathrm{peak,bol}\leq 10$ days ($t_\mathrm{peak,bol}$ of SN~2007D is $\sim 10$ days), comparable to that of SLSNe (see, e.g., Figure 5 of \citealt{Inse2013}) and significantly higher than that of ordinary SNe~Ic at the same epoch ($\lesssim 7,000$ K; \citealt{Liu2017b}). The derived temperature of SN~2007D at the same epoch in Case B is 8000--9000~K, between that of SLSNe-I and that of ordinary SNe~Ic. We compare the spectrum of SN~2007D with spectra of three SLSNe-I (LSQ14bdq, SN~2016aj, and SN~2015bn) at the same epoch (see Figure \ref{fig:spec}), finding that SN~2007D is redder than these SLSNe. This result indicates that the temperature of SN~2007D is lower than the temperatures of these three SLSNe-I and that Case B is favored --- that is, SN~2007D might be a luminous SN~Ic rather than a SLSN-I. \subsection{Physical Parameters of the Ejecta of SN~2007D and the Magnetar} The physical properties of the ejecta of SN~2007D deserve further discussion. We focus on the properties derived from the magnetar model and the magnetar plus $^{56}$Ni model, since the $^{56}$Ni-only model was disfavored. The ejecta mass of SN~2007D inferred from the magnetar plus $^{56}$Ni model is $\sim 1.3\,$M$_\odot$, smaller than the average ejecta masses of SNe~Ic and SNe~Ic-BL, but at the lower end of the mass distribution of magnetar-powered SLSNe \citep{Nich2015,Liu2017a,Yu2017,Nich2017}. The inferred ejecta mass suggests that the progenitor of SN~2007D might have been in a binary system and experienced mass transfer and/or line-driven wind mass loss. A low mass results in a rather short rise time ($t_\mathrm{peak,bol} \approx 10$ days), comparable to that of the Type Ic SN~1994I (e.g., \citealt{Nom1994,Iwa1994,Fil1995,Sau2006}) and to those of several luminous ``gap-filler'' optical transients bridging ordinary SNe and SLSNe \citep{Arc2016}.
By adopting the equation $\tau_{m}=(2{\kappa}M_{\mathrm{ej}}/{\beta}v_{\mathrm{sc}}c)^{1/2}$ (where $\beta = 13.8$ is a constant; \citealt{Arn1982}), we find that the diffusion timescale $\tau_{m}$ is $\sim$ 8.3 days. The values of $P_0$ and $B_{p}$ of the magnetar are $\sim 7.4$\,ms (or $\sim 9.0$\,ms for Case B) and $3\times 10^{14}$\,G (or $2.5\times 10^{14}$\,G for Case B), respectively. Hence, the magnetar's initial rotational energy $E_{\mathrm{rot,0}} \approx 2 \times 10^{52} \left({P_{0}}/{1~\mathrm{ms}}\right)^{-2}$ erg and spin-down timescale $\tau_{p} = 5.3\,(B_{p}/10^{14}~{\rm G})^{-2}(P_0/1~{\rm ms})^2~{\rm days}$ are $\sim 3.65 \times 10^{50}$ (or $\sim 2.47 \times 10^{50}$) erg (a factor of 5--7 smaller than $E_{\mathrm{K0}}$) and 32.3 (or 68.7) days, respectively.
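These numbers can be reproduced directly from the two expressions above; a minimal sketch using the rounded best-fit values:

\begin{verbatim}
def e_rot_erg(p0_ms):
    """E_rot,0 ~ 2e52 (P0 / 1 ms)^(-2) erg."""
    return 2e52 / p0_ms**2

def tau_p_days(b14, p0_ms):
    """tau_p = 5.3 (Bp / 1e14 G)^(-2) (P0 / 1 ms)^2 days."""
    return 5.3 * p0_ms**2 / b14**2

print(e_rot_erg(7.4), tau_p_days(3.0, 7.4))  # Case A: ~3.7e50 erg, ~32 d
print(e_rot_erg(9.0), tau_p_days(2.5, 9.0))  # Case B: ~2.5e50 erg, ~69 d
\end{verbatim}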
\section{Conclusions} \label{sec:con} SN~2007D is a very nearby SN~Ic whose luminosity distance and redshift are $106_{-8.5}^{+2}$ Mpc and $0.023146 \pm 0.000017$, respectively. \citet{Dro2011} demonstrated that SN~2007D is a very luminous SN~Ic, with $M_{R,{\mathrm{peak}}} \approx -20.65 \pm 0.55$ mag and $M_{V,{\mathrm{peak}}} < -20.54$ mag, brighter than the SLSN threshold ($-20.5$ mag) given by \citet{Qui2018} and \citet{DeCia2018}, and inferred that the mass of $^{56}$Ni powering the luminosity evolution of SN~2007D is $1.5 \pm 0.5 $\,M$_\odot$. Adopting the values of \citet{Sch2011} for the foreground extinction and the $K$-corrected $V$-band LC of SN~2007D, however, we found a peak absolute magnitude $M_\mathrm{V,peak}$ of only $\sim -20.06$ mag, $\sim 0.48$ mag dimmer than the value of \citet{Dro2011}. Our simple estimate shows that the ratio of the $^{56}$Ni mass to the ejecta mass of SN~2007D would be unrealistically large ($\sim 0.43_{-0.31}^{+0.86}$). To verify the validity of the $^{56}$Ni cascade decay model, we used the $^{56}$Ni model to fit the $R$ and $V-R$ LCs and found that the required $^{56}$Ni mass ($\sim 2.66_{-0.15}^{+0.17}\,$M$_\odot$ for Case A or $1.61_{-0.07}^{+0.08}\,$M$_\odot$ for Case B) is larger than the inferred ejecta mass ($\sim 1.39_{-0.33}^{+0.19}\,$M$_\odot$ for Case A or $\sim 1.45_{-0.32}^{+0.17}\,$M$_\odot$ for Case B) if the multiband LCs were solely powered by $^{56}$Ni, indicating that the $^{56}$Ni model cannot account for the LCs of SN~2007D. Alternatively, we employed the magnetar model and found that the LCs can be fitted with reasonable parameters if the initial period $P_0$ and the magnetic field strength $B_p$ of the putative magnetar are $7.28_{-0.21}^{+0.21}$\,ms (or $9.00_{-0.42}^{+0.32}$\,ms for Case B) and $3.10_{-0.35}^{+0.36} \times 10^{14}$\,G (or $2.81_{-0.44}^{+0.43} \times 10^{14}$\,G for Case B), respectively. By comparing the LCs reproduced by the magnetar model and by the magnetar plus $^{56}$Ni model (with the mass of $^{56}$Ni set to $0.2$\,M$_\odot$), we found that the contribution of $^{56}$Ni is significantly lower than that of the magnetar and can be neglected; it is very difficult to distinguish between the LCs reproduced by these two models. Nevertheless, a moderate amount of $^{56}$Ni is expected, since the shock launched from the surface of the proto-magnetar would heat the silicon shell located at the base of the SN ejecta, synthesizing $\lesssim 0.2\,$M$_\odot$ of $^{56}$Ni. According to these results, we suggest that SN~2007D might be powered by a magnetar, or by a magnetar plus $\lesssim 0.2\,$M$_\odot$ of $^{56}$Ni. Adopting the SLSN threshold ($-20.5$ mag) given by \citet{Qui2018} and \citet{DeCia2018}, and assuming that the peak magnitudes of the $R$ and $V$ LCs of SN~2007D are $-20.65 \pm 0.55$ mag and $< -20.54$ mag, respectively, one can conclude that SN~2007D is a SLSN. If we use the values of \citet{Sch2011} for the foreground extinction, however, the luminosity of SN~2007D would be $\sim 0.48$ mag dimmer, making it only a luminous SN rather than a SLSN. The spectrum provides additional evidence for discriminating between these two possibilities. We find that the extinction-corrected premaximum spectrum of SN~2007D is redder than the spectra of three comparison SLSNe-I (LSQ14bdq, SN~2016aj, and SN~2015bn) at a similar epoch, indicating that the temperature of SN~2007D is lower than that of these objects. This favors the possibility that SN~2007D is a luminous SN rather than a SLSN. \acknowledgments We thank the anonymous referee for constructive suggestions that led to improvements in our manuscript. This work is supported by the National Basic Research Program (``973'' Program) of China (grant 2014CB845800), the National Key Research and Development Program of China (Grant No. 2017YFA0402600), and the National Natural Science Foundation of China (grant 11573014). S.Q.W. and L.D.L. are supported by the China Scholarship Program to conduct research at U.C. Berkeley and UNLV, respectively. L.J.W. is supported by the National Program on Key Research and Development Project of China (grant 2016YFA0400801). A.V.F.'s supernova group is grateful for financial assistance from the Christopher R. Redlich Fund, the TABASGO Foundation, and the Miller Institute for Basic Research in Science (U.C. Berkeley). This research has made use of the CfA Supernova Archive, which has been funded in part by the National Science Foundation through grant AST 0907903, the Weizmann Interactive Supernova Data Repository (WISeREP), and the Transient Name Server.
{ "timestamp": "2019-04-16T02:12:53", "yymm": "1904", "arxiv_id": "1904.06598", "language": "en", "url": "https://arxiv.org/abs/1904.06598" }
\section{Introduction} \label{sec:introduction} \IEEEPARstart{R}{ecently}, traditional power grids have been evolving into smart grids, which brings several benefits, including enhanced reliability and resilience, higher intelligence and optimized control, decentralized operation, higher operational efficiency, more efficient demand management, better power quality, and fraud detection \cite{fadel2015survey}. The smart grid is envisaged to be the next generation of the traditional grid. In a smart grid, consumers minimize their expenses while providers maximize their revenue; hence, a win-win partnership can be achieved. In contrast to traditional grids, smart grids feature a bidirectional information flow between suppliers and consumers rather than a centralized unidirectional system. This enables the supplier to generate electricity based on demand; at the same time, the supplier can define dynamic billing tariffs, and based on these tariffs, which are sent to consumers periodically (e.g., every $15$ minutes), each consumer decides whether to decrease or increase its power consumption. Thus, electricity is consumed in a more efficient manner. In the other direction of information flow, consumers declare their need for electricity or their power consumption; that is, consumers send their momentary electricity usage to the suppliers. As a result, unlike in traditional grids, suppliers in smart grids provide electricity based on consumer demand to avoid wasting power \cite{alabdulatif2017privacy}. In order to provide two-way communication in the smart grid, consumers should be equipped with smart meters by which they can measure their usage and send/receive messages over communication links such as power-line, cable, fibre, or radio. The classic approach to billing is to gather all power consumption information in a center: consumers periodically send their electricity usage, by means of smart meters, to a server responsible for gathering data, and the server then dispatches the gathered information to a local or central database. The electricity bill for each consumer is calculated based on the consumer's records in the database. The criticism of this scheme is that privacy is not preserved: since each individual consumer sends its usage, its power consumption pattern is apparent to the center, revealing, for instance, inhabitants' personal schedules, habits, religion, and so on \cite{li2014enabling,lu2016privacy}. Another trivial approach to billing is for the supplier to send time-varying tariffs periodically to the consumers, whose smart meters compute the electricity consumption price over a defined period (e.g., one month) based on the received tariffs. At the end of each period, every consumer sends only its total billing amount to the supplier. In this case, the privacy of each consumer is preserved; however, the supplier cannot verify consumers' billing reports. Consequently, malicious consumers could take advantage of, or disrupt, the smart grid by sending incorrect information about their power usage. In this case, historical analysis of electricity usage reports would not be useful for identifying malicious users: a consumer whose power consumption pattern changes over time might be flagged as declaring incorrect information, while a malicious consumer who consistently sends artificial data cannot be easily identified \cite{XiaXLZ2018}.
Given the aforementioned scenarios, the main challenge in communications between consumers and suppliers is preserving the privacy of consumers while identifying any malicious consumer in the smart grid. To address this challenge, we propose a new scheme called statistical-based privacy preserving (SBPP). An earlier version of SBPP was presented in \cite{sbpp1}; the present work extends our previous paper, providing more technical details and adding some new ideas for data gathering and fraud detection. The proposed scheme enables privacy-preserving data gathering and detection of malicious consumers at the same time. SBPP provides an efficient solution for privacy preservation in terms of computational complexity and communication overhead. The key idea of the SBPP scheme is to combine the usages of different consumers at local data aggregators (to preserve privacy) while also sending the accurate usage of some randomly selected consumers to the supplier. The supplier then uses the accurate usage of different consumers over random periods of time to detect malicious consumers. Simulation results verify that SBPP is reliable for detecting malicious consumers in different sabotage scenarios. The remainder of this paper is organized as follows. In Section~\ref{sec:relatedworks}, we briefly discuss related works. In Section~\ref{sec:systemmodel}, the system model is introduced. In Section~\ref{sec:proposedscheme}, we describe our proposed statistical-based scheme for data gathering in smart grids. In Section~\ref{sec:simulation}, the simulation results of the proposed scheme are presented. Finally, we conclude the paper in Section~\ref{sec:conclusion}. \section{Related Works} \label{sec:relatedworks} Recently, many researchers have paid attention to privacy-preserving solutions for smart grids. In this section, we briefly review some proposed schemes for privacy-preserving data gathering in smart grids. In \cite{wong2014privacy}, the authors propose an algorithm for data collection with self-awareness protection. The paper considers data aggregators and consumers in a smart grid where some of the respondents may not participate in contributing their personal data or may submit erroneous data. To overcome this issue, a self-awareness protocol is proposed to enhance the trust of respondents when sending their personal data to the data aggregators. In this scheme, all consumers collaborate with each other to preserve privacy, adopting an idea that allows respondents to know the protection level before the data submission process is initiated. The work is motivated by \cite{domingo2010coprivacy} and \cite{ferrer2011coprivacy}. In \cite{domingo2010coprivacy}, co-privacy (co-operative privacy) is introduced; co-privacy claims that the best solution for achieving privacy is to help other parties achieve theirs. Many papers study self-oriented privacy protection methods. For example, \cite{golle2008data} introduces self-enforcing privacy (SEP) for e-polling. In the SEP scheme, the supplier allows consumers to track their submitted data in order to protect their privacy; the consumers can then accuse the supplier based on data they gathered during the collection process. Following this idea, a fair approach to accusation is presented in \cite{stegelmann2010towards}. In \cite{kumar2010freedom}, respondent-defined privacy protection (RDPP) is introduced.
This means that respondents are allowed to determine their required privacy protection level before delivering data to the data collector. Unlike other methods, in which the data aggregators decide on the privacy protection level, in this scheme the consumers can define it freely. There are also other studies on privacy-preserving data collection. For instance, in \cite{wang2016balanced}, the authors design a scheme with balanced anonymity and traceability for outsourcing small-scale linear data aggregation (called BAT-LA) in smart grids. Anonymity means that consumers' identities should be kept secret, and traceability means that impostor consumers can be traced. An important challenge here is that many devices are not capable of handling the required complicated computations; hence, the authors adopt the idea of outsourcing computations to a public cloud. The paper utilizes elliptic curve cryptography and proxy re-encryption to make BAT-LA secure. BAT-LA is evaluated by comparing it with two other schemes, RVK \cite{wang2015tpp} and LMO \cite{rottondi2013distributed}, and it is shown that BAT-LA is more efficient in terms of confidentiality. The papers \cite{wang2016balanced} and \cite{chun2018privacy} focus on outsourcing to clouds or distributed systems. For the encryption process, it is important to use a secure key management scheme; the cryptographic technique ensures that no privacy-sensitive information is revealed. However, there remains the challenge of efficiently querying encrypted multidimensional metering data stored in an untrusted, heterogeneous, distributed environment. \cite{jiang2018achieving} addresses this issue and introduces a high-performance, privacy-preserving query (P2Q) scheme that provides confidentiality and privacy in a semi-trusted environment. To protect the privacy of residential consumers, a scheme named APED is proposed in \cite{sun2013aped}. The paper employs pairwise private stream aggregation; the scheme achieves privacy-preserving aggregation and also performs error detection when some nodes fail to function normally. DG-APED, suggested in \cite{shi2015diverse}, is an improved form of APED that proposes a diverse grouping-based protocol with error detection, adding a differential privacy technique to APED. Moreover, DG-APED is more efficient than APED in terms of communication and computation overhead. In \cite{jia2014human}, the authors present a new kind of attack, in which an adversary extracts information about the presence or absence of a consumer from the smart meter information. The attack is called human-factor-aware differential aggregation (HDA), and it is claimed that other proposed schemes cannot handle it. To solve this issue, the authors introduce two privacy-preserving schemes that can withstand the HDA attack by transmitting encrypted measurements to an aggregator in such a way that the aggregator cannot extract any information about human activities. PDA is a privacy-preserving dual-functional aggregation scheme in which every consumer disseminates only one datum, and the supplier then computes two statistics (the mean and variance) over all consumers \cite{li2015pda}. The paper shows by simulation that PDA has low computational complexity and communication overhead. In another work, the authors introduce privacy-preserving data aggregation with fault tolerance, called PDAFT \cite{chen2015pdaft}.
If PDAFT is implemented, even a strong adversary that compromises a few servers at the supplier cannot gain any information. PDAFT is resilient against many security threats, although it has a relatively high communication overhead, and it continues to work correctly even if some consumers or servers fail. DPAFT \cite{bao2015new} is another privacy-preserving data collection scheme, supporting both differential privacy and fault tolerance at the same time. It is claimed that DPAFT surpasses other schemes in many aspects, such as storage cost, computation complexity, utility of differential privacy, robustness of fault tolerance, and the efficiency of consumer addition or removal \cite{bao2015new}. A new multifunctional data aggregation scheme, named MuDA, is introduced in \cite{chen2015muda}. The scheme is resistant to differential attacks and keeps consumers' information secret with an acceptable noise rate. PDAFT \cite{chen2015pdaft}, DPAFT \cite{bao2015new}, and MuDA \cite{chen2015muda} show nearly the same characteristics and differ mainly in their cryptographic methods \cite{ferrag1611survey}: PDAFT employs the homomorphic Paillier cryptosystem \cite{paillier1999public}, while DPAFT and MuDA use the Boneh-Goh-Nissim cryptosystem \cite{boneh2005evaluating}.

In \cite{fan2014privacy}, the authors present a secure power usage data aggregation method for smart grids in which the supplier learns the usage of each neighborhood and makes decisions about energy distribution, while knowing nothing about the individual electricity consumption of each consumer. This method is designed to resist internal attacks and to provide batch verification. The authors of \cite{he2016privacy} found that \cite{fan2014privacy} suffers from key leakage: an impostor can easily obtain the private key of a consumer. It is shown that the protocol in \cite{he2016privacy} solves the key leakage problem and achieves better performance in terms of computational cost; neglecting energy cost is the main disadvantage of this method. In \cite{chun2018privacy}, a privacy-preserving protocol for smart grids is presented which outsources computations to cloud servers. In this protocol, the data is encrypted before outsourcing, and consequently the cloud can perform any computation without decryption. \cite{baloglu2018lightweight} combines perturbation techniques with cryptosystems to preserve privacy and is designed to be suitable for hardware-limited devices. Evaluations show that it is resilient to two types of attack: the filtering attack and the true value attack. The authors of \cite{rial2018privacy} explain how, for privacy preservation, an individual meter can share its readings with multiple consumers and how a consumer can receive meter readings from multiple meters; they propose a polynomial-based protocol for pricing. In \cite{csimcsek2018tps3}, a security protocol called TPS3 is introduced, which uses temporal perturbation and Shamir's secret sharing (SSS) to guarantee the privacy and reliability of consumers' data. In \cite{liao2017optimal}, the data collector tries to preserve privacy by adding random noise to its computation result. To overcome the resulting loss of computation accuracy, an approximation method is proposed in \cite{liao2017optimal}, which yields a closed form of the aggregator's decision problem.
In \cite{xu2018privacy}, a slightly different scenario is considered, in which a data aggregator collects data from consumers and then forwards it to the supplier. The goal is to preserve the privacy of consumers' data. Anonymization might be an answer, but it has its own challenges. To achieve a tradeoff between privacy protection and data utility, the interactions among the three parties of the scenario (consumers, data aggregator, and supplier) are modeled as a game, and the Nash equilibria of the game are found.

In this paper, we use the idea of aggregating the data of different consumers to preserve the privacy of each individual consumer. The proposed SBPP scheme applies a simple statistical method for identifying malicious consumers who send erroneous data to the aggregator in order to gain an advantage or to disrupt the smart grid. SBPP requires no changes on the consumer side or in the communication infrastructure, and it imposes very low computational overhead on the aggregators and the billing center for preserving privacy and detecting malicious consumers. Therefore, SBPP is practical in the sense that it can easily be implemented on the existing infrastructure of smart grids.

\section{System Model} \label{sec:systemmodel}

In this section, we present our system model. The essential elements of our proposed scheme are the following.

\textit{Consumer:} an entity that consumes energy in the power grid.

\textit{Benign consumer:} a consumer that reports its power consumption correctly.

\textit{Malicious consumer:} a consumer that reports its power consumption incorrectly for purposes such as fraud (power theft) or disruption (power loss).

\textit{Supplier:} an entity whose responsibility is to provide energy for the power consumers in a region.

\textit{Data aggregator:} a local facility responsible for periodically gathering power consumption information from the consumers and dispatching the gathered data to the supplier.

\textit{Electricity leakage:} the difference between the actual amount of consumed energy and the sum of the quantities reported by the consumers as their power consumption.

We consider a power grid consisting of $M$ regions, where each region comprises one data aggregator. Denote the number of consumers in the $j$'th region by $n_j$ for $j = 1, \ldots, M$. The consumers send their power consumption, measured by their smart meters, to the local aggregators. The aggregators are responsible for gathering the local data and sending information regarding the usage of the consumers to the power supplier. Data aggregators are assumed to be trusted: no information leakage occurs at the aggregators, since once aggregation takes place, no raw information concerning the power consumption of individual consumers remains at hand. Besides, we assume that communications among the above entities of the smart grid are secured. This means that the data submitted by a consumer cannot be altered in the communication infrastructure, i.e., erroneous data can only be generated by the consumer itself.

\section{Proposed Scheme} \label{sec:proposedscheme}

Although the accuracy of a smart grid's operation depends on the correctness of the data gathered from the consumers, this data gathering should not violate the privacy of the consumers. Here we propose a new privacy-preserving data gathering scheme whose purpose is to inform the supplier of the instantaneous power consumption.
The proposed scheme provides the power usage information to the supplier while keeping the consumers' power consumption information private and, more importantly, identifies malicious consumers in the process. The proposed scheme imposes very low computational complexity and communication overhead on the smart grid in comparison with existing methods.

\subsection{Data Gathering in SBPP}

Here we present the statistical-based privacy-preserving (SBPP) scheme for data gathering in smart grids. SBPP gathers the information in the following steps (a short programmatic sketch of this procedure is given at the beginning of the next subsection):

\begin{enumerate} \item [1.] Consumers report their power consumption periodically to the local data aggregator. \item [2.] At each time period, the aggregator computes the total amount of power consumption based on the data gathered from the consumers. It also randomly selects the reported value of one of the consumers. \item [3.] The aggregator sends the total power consumption value, along with the identity and reported value of the selected consumer, to the supplier. \item [4.] The supplier provides energy based on the reports of the aggregators and stores the reported values of the randomly selected consumers. \end{enumerate}

Fig.~\ref{datagathering} depicts how data gathering takes place. It is assumed that the data aggregators are trusted and that the individual power consumption data is no longer at hand after being summed up by the aggregators. Under this assumption, instead of the supplier having access to the power consumption data of every consumer at every period of time, only a small portion of information about the power consumption of each consumer is available. As an example, suppose there are $100$ consumers in a region with one data aggregator, and let the period of data gathering be $15$ minutes. Without this data gathering mechanism, a consumer would send its power consumption information to the supplier $30 \times 24 \times 60/15 = 2880$ times in a month. By employing our scheme, on average only $2880/100 = 28.8$ values regarding the power consumption of each consumer are available at the supplier over an analogous period. Although it may seem that having access to any power consumption information of the consumers contradicts their privacy, the availability of roughly $29$ values per month, taken at random periods, reveals far less about a consumer's lifestyle than the availability of all $2880$ values within a month.

\begin{figure*}[t!] \begin{center} \includegraphics[width=\textwidth]{Drawing6.jpg} \end{center} \caption{The total power consumption is calculated by the aggregator in each region and sent to the supplier. Here $r_{ij}$ denotes the power consumption reported by consumer $i$ in region $j$, and $k_j$ denotes the index of the randomly chosen consumer in region $j$.} \label{datagathering} \end{figure*}

\subsection{Detecting Malicious Consumers}

Malicious consumers pursue one of two distinct aims when sending erroneous data to suppliers: either they declare their power consumption to be lower than their real consumed power in order to pay a lower fee, or they report their power consumption to be much higher in order to impose extra expenditure on the supplier. In this paper, we make use of the Pearson correlation coefficient of the power consumption of consumers in order to find the malicious consumers in each region who send erroneous data to the supplier.
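Before formalizing this correlation analysis, we summarize the per-period data gathering step of the previous subsection in a minimal Python sketch. The data structures and names below are our own illustrative assumptions, not part of the protocol:

\begin{verbatim}
import random

def aggregate_period(reports):
    # reports: dict mapping consumer ID -> reported usage for this period
    total = sum(reports.values())
    # Randomly select one consumer whose exact report is forwarded
    probe_id = random.choice(list(reports))
    # Only the regional total and one (ID, value) pair leave the region
    return total, probe_id, reports[probe_id]

# Illustrative use: 100 consumers, one 15-minute period
reports = {i: random.uniform(0.0, 2.0) for i in range(100)}
total, probe_id, probe_value = aggregate_period(reports)
\end{verbatim}

The supplier stores the stream of probed (ID, value) pairs together with the per-period leakage values; these are exactly the quantities that the detection procedure below operates on.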
The Pearson correlation coefficient quantifies the statistical relationship between two variables and is defined as follows:
\begin{equation} corr(x,y) = \frac{cov(x,y)}{\sqrt{var(x) \cdot var(y)}} \label{corr} \end{equation}
where $cov$ denotes covariance and $var$ denotes variance. The correlation coefficient takes values in the range $[-1,+1]$, where $+1$ and $-1$ indicate the strongest possible agreement and disagreement, respectively.

In order to find malicious consumers, it is assumed that the data aggregators are aware of the total amount of power actually consumed in each region. By comparing this amount with the aggregate quantity declared by the consumers, the electricity leakage value (shortage) can be determined. Having access to merely one power consumption value of a consumer does not suffice to distinguish whether that consumer is benign or malicious; the more information we have regarding the power consumption of each consumer, the better we can judge whether that consumer is sabotaging the grid. Thus, the procedure for finding malicious consumers can take place at the end of a month or after a few months. In order to detect malicious consumers, each data aggregator stores the identity (ID) of the randomly selected consumer, its reported power consumption ($r$), and the electricity leakage of that region at every period ($l$). Then, for each consumer, the data aggregator computes the correlation coefficient between its reported consumed energy and the corresponding leakage values. The leakage quantity is defined as:
\begin{equation} l = c - r \label{leakage} \end{equation}
where $c$ and $r$ are the actual and reported power consumption values, respectively.

It is straightforward to see that the correlation coefficient tends to $0$ for a benign consumer, since the leakage value is statistically independent of the power consumption of a benign consumer. If the correlation coefficient tends to $+1$ for a consumer, then, according to (\ref{leakage}), that consumer is reporting its power consumption to be less than its actual used power. On the other hand, if the correlation coefficient tends to $-1$, that consumer is declaring its power consumption to be more than its usage, due to some subversive goal. Thus, the SBPP scheme is capable not only of detecting malicious consumers, but also of determining whether a consumer declares its power consumption to be less or more than the actual quantity. Next, we study the performance of SBPP in several scenarios based on the number of malicious consumers in each region and the behavior of the attackers.

\subsubsection{Scenario I: Existence of at most one malicious consumer in each region} \ Suppose that there exists at most one malicious consumer in each region. According to the declared power consumption of the consumers, three cases can be considered. Note that in all of the following formulas the variables are zero-mean, i.e., the mean value of each variable is subtracted from its raw value; throughout the paper, zero-mean vectors are denoted by $\bar{\textbf{x}}$.

\textit{\textbf{Case I:} Malicious consumer reports a portion/multiple of its actual usage:} In this case, the quantities reported by the malicious consumer are clearly correlated with the leakage amounts.
Consider an arbitrary consumer, and let $\bar{\textbf{r}}$ and $\bar{\textbf{l}}$ be the zero-mean vectors containing the reported values of that consumer and the corresponding electricity leakage amounts in those periods. The correlation coefficient for the consumer can be written as:
\begin{equation} corr(\bar{\textbf{r}},\bar{\textbf{l}}) = \frac{\bar{\textbf{r}}^T \bar{\textbf{l}}}{\|\bar{\textbf{r}}\| \|\bar{\textbf{l}}\|} \label{malcorr} \end{equation}
In this case, the malicious consumer reports a portion/multiple of its consumed energy, i.e., $r = \alpha c$ with a positive coefficient $\alpha > 0$. Thus, using (\ref{leakage}), the correlation coefficient (\ref{malcorr}) of the malicious consumer becomes:
\begin{align}
corr(\bar{\textbf{r}},\bar{\textbf{l}}) & = \frac{\bar{\textbf{r}}^T \bar{\textbf{l}}}{\|\bar{\textbf{r}}\| \|\bar{\textbf{l}}\|} \nonumber \\
& = \frac{\alpha \bar{\textbf{c}}^T (1-\alpha) \bar{\textbf{c}}}{|\alpha| \|\bar{\textbf{c}}\| \, |1-\alpha| \|\bar{\textbf{c}}\|} \nonumber \\
& = \frac{\alpha(1-\alpha)\|\bar{\textbf{c}}\|^2}{|\alpha||1-\alpha|\|\bar{\textbf{c}}\|^2} \nonumber \\
& = \frac{\alpha(1-\alpha)}{|\alpha||1-\alpha|}
\label{case1}
\end{align}
As stated before, when the malicious consumer reports its power consumption to be less than the actual quantity ($0 < \alpha < 1$), the correlation coefficient equals $+1$; conversely, the correlation coefficient equals $-1$ when the malicious consumer declares its consumed energy to be more than its actual usage ($\alpha > 1$).

\textit{\textbf{Case II:} Malicious consumer adds/subtracts a fixed quantity to/from its actual usage:} In this case, the quantity reported by the malicious consumer is independent of the leakage amount as long as the actual consumed energy lies above the fixed value ($\eta$) that is subtracted from the actual power usage. Indeed, since the reported consumed energy cannot be negative, the reported quantity and the corresponding electricity leakage for each period can be written as:
\begin{align}
& r = \begin{cases} c-\eta & \text{if } c \geq \eta \\ 0 & \text{if } c < \eta \end{cases}\\
& l = \begin{cases} \eta & \text{if } c \geq \eta \\ c & \text{if } c < \eta \end{cases}
\end{align}
While the actual consumed energy in a period is greater than the fixed threshold $\eta$ ($c \geq \eta$), the two quantities are independent, and thus the malicious consumer cannot be detected. On the other hand, while the consumed power is less than the threshold ($c < \eta$), the malicious consumer reports its consumed energy as zero, and thus the reported consumed power is dependent on the leakage quantity. Consequently, by focusing on the measurements where $r$ is small, we can still detect the malicious consumer.

\textit{\textbf{Case III:} Malicious consumer adds/subtracts a random quantity to/from its actual usage:} Assume that the malicious consumer adds/subtracts a random value, independent of its power consumption, to/from its consumed energy such that none of its reported quantities becomes negative. In this case, although the declared amounts of power consumption are independent of the electricity leakage at each period, the proposed scheme is capable of detecting the malicious consumer as well.
Writing the report of the malicious consumer as $r = c - \theta$, where $\theta$ is the random value (so that, by (\ref{leakage}), $l = \theta$), the correlation coefficient becomes:
\begin{align}
corr(\bar{\textbf{r}},\bar{\textbf{l}}) & = \frac{\bar{\textbf{r}}^T \bar{\textbf{l}}}{\|\bar{\textbf{r}}\| \|\bar{\textbf{l}}\|} \nonumber \\
& = \frac{(\bar{\textbf{c}} - \bar{\boldsymbol{\theta}})^T \bar{\boldsymbol{\theta}}}{\|\bar{\textbf{c}} - \bar{\boldsymbol{\theta}}\| \|\bar{\boldsymbol{\theta}}\|} \nonumber \\
& = \frac{\bar{\textbf{c}}^T \bar{\boldsymbol{\theta}}}{\|\bar{\textbf{c}} - \bar{\boldsymbol{\theta}}\| \|\bar{\boldsymbol{\theta}}\|} - \frac{\|\bar{\boldsymbol{\theta}}\|}{\|\bar{\textbf{c}} - \bar{\boldsymbol{\theta}}\|}
\label{case3}
\end{align}
where $\bar{\boldsymbol{\theta}}$ is the zero-mean vector of the random values ($\theta$s) added to or subtracted from the actual usage. Since $\bar{\textbf{c}}$ and $\bar{\boldsymbol{\theta}}$ are independent, the first term in (\ref{case3}) tends to zero, and thus the correlation coefficient of the malicious consumer tends to a negative quantity. As the correlation coefficients of benign consumers revolve around $0$, the malicious consumer is the one with the most negative correlation coefficient.

\subsubsection{Scenario II: Existence of more than one malicious consumer in each region} \ It is furthermore possible that more than one malicious consumer exists in a region. In this case, although the correlation coefficients of these consumers will not be equal to \textpm{1}, they can still be distinguished from those of the other consumers. As a result, a threshold ($th$) must be defined such that correlation coefficients with absolute value below the threshold indicate benign consumers and those above it indicate malicious consumers:
\begin{align}
\begin{cases}
\text{malicious consumer}, & \text{if } \; -1 \leq corr \leq -th \\
\text{benign consumer}, & \text{if } -th < corr < th \\
\text{malicious consumer}, & \text{if } \;\;\; th \leq corr \leq 1
\end{cases}
\label{Threshold}
\end{align}
It is apparent that with a higher threshold fewer malicious consumers are detected, while with a lower threshold more benign consumers are misclassified as malicious. Thus, the question arises of how a proper threshold should be chosen. The analysis of detecting several malicious consumers from a small number of samples per consumer is out of the scope of this paper; however, we briefly discuss the problem in the next section. In this paper, according to the setting of the problem, we set the threshold to a fixed value, namely $0.5$. As the proposed scheme is a statistical technique, it is possible that the correlation coefficient of a benign consumer lies outside its region defined in (\ref{Threshold}), or vice versa.

\subsection{Billing}

Here we describe the billing procedure in the SBPP scheme. Billing is handled by the data aggregators. As discussed in the last section, malicious consumers can be identified by analyzing the correlation coefficient of each consumer in the region. Once the malicious consumers are detected, the data sent by the other consumers is considered trustworthy and error-free. Based on this assumption, the responsibility for billing can be assigned to the data aggregators. In every period, the consumers send their amount of consumed energy to the data aggregators. Then, based on the data received from the consumers and the tariffs received from the supplier, the data aggregators compute the cost of the consumed power for each consumer before data aggregation takes place.
In each period, the data aggregators calculate the cost of the consumed power for each consumer and add it to the previously accumulated cost for that consumer; at the end of the month, a bill is issued and sent to the supplier's accounting center. Note that the task of computing the bills could be assigned to the smart meters as well. This scheme not only decreases the signalling overhead, but also protects the privacy of the consumers. It is merely required that the supplier periodically send the tariffs to the data aggregators and to the consumers simultaneously. The data aggregators compute the cost of the consumed energy for every consumer, and the smart meters on the consumers' side adjust the power consumption based on the received tariffs, i.e., when the tariff increases, the smart meters force dispensable devices to be turned off. In this case, no information leakage, and thus no privacy invasion, occurs.

\section{Simulation Results} \label{sec:simulation}

In this section, we describe the simulation results for the proposed SBPP scheme. The results verify that SBPP is capable of detecting malicious consumers who send bogus information concerning their power consumption. Although in reality the power consumption of a consumer is correlated across successive periods, in all simulations we draw an independent random power consumption for each consumer in each period, which is the worst case that could be considered. We show that our proposed method works properly in this case and thus could be applied in real-world smart grids. In the following, the previously described scenarios are investigated.

\subsection{Case I: Malicious consumer multiplies its usage}

Consider a region consisting of $100$ consumers and one data aggregator, where data aggregation takes place every $15$ minutes; unless mentioned otherwise, all subsequent simulations are based on this setting. Assume that consumer $\#25$ is malicious. Two scenarios are studied: in scenario (i), consumer $\#25$ reports one tenth ($0.1$) of its power consumption, and in scenario (ii) it declares its power usage to be $10$ times its actual consumption. Fig.~\ref{Malicious}~(a) illustrates scenario (i), where the correlation coefficient between the reported consumed energy and the leakage amounts turns out to be $+1$, and Fig.~\ref{Malicious}~(b) depicts scenario (ii), where the correlation coefficient turns out to be $-1$.

\begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{Malicious2-eps-converted-to} \caption{Correlation coefficient of the reported consumed energy and the leakage amounts of power consumption for all consumers in the grid. (a) The scenario where the malicious consumer declares its power consumption to be less than the actual quantity, and (b) the scenario where the malicious consumer declares its power consumption to be more than the actual quantity.} \label{Malicious} \end{center} \end{figure}

Note that henceforth all assumptions are analogous to those of Fig.~\ref{Malicious}, with consumer $\#25$ as the malicious consumer, unless mentioned otherwise. Next, we assume that there are three malicious consumers: $\#25$, $\#50$, and $\#75$. Consumers $\#25$ and $\#75$ declare their power consumption to be less than their actual consumption, and consumer $\#50$ reports its power consumption to be more than its actual consumed energy. By setting the threshold to $0.5$, consumers whose correlation coefficient has absolute value greater than $0.5$, i.e., $|corr| \geq 0.5$, are considered malicious, as depicted in Fig.~\ref{mixed}.
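The correlation test itself takes only a few lines of code. The following self-contained Python sketch is an illustrative reproduction, under our stated assumption of i.i.d. uniform consumption, of the single-malicious-consumer setting of Fig.~\ref{Malicious}, scenario (i); it is not the exact code used to generate the figures:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, T, th = 100, 2880 * 12, 0.5   # consumers, 15-min periods in a year, threshold
bad, alpha = 25, 0.1             # consumer #25 reports one tenth of its usage

c = rng.uniform(0.0, 2.0, size=(T, n))      # actual consumption per period
r = c.copy()
r[:, bad] *= alpha                          # erroneous reports of consumer #25
leak = c.sum(axis=1) - r.sum(axis=1)        # leakage l = c - r per period
probe = rng.integers(0, n, size=T)          # randomly probed consumer per period

corr = np.zeros(n)
for i in range(n):
    idx = probe == i                        # periods in which i was probed
    if idx.sum() > 1:
        corr[i] = np.corrcoef(r[idx, i], leak[idx])[0, 1]

flagged = np.where(np.abs(corr) >= th)[0]   # expected: consumer 25, corr ~ +1
\end{verbatim}

The correlation coefficients of the benign consumers cluster around zero, with a spread that shrinks as more periods are observed (cf.\ Fig.~\ref{12months}), while consumer $\#25$ stands out with a coefficient near $+1$.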
As can be seen from Fig.~\ref{mixed}, a fixed threshold results in three possible outcomes: 1) only the malicious consumers are detected (Fig.~\ref{mixed}~(a)), 2) in addition to the malicious consumers, some benign consumers are flagged as malicious (Fig.~\ref{mixed}~(b)), and 3) only a subset of the malicious consumers is detected (Fig.~\ref{mixed}~(c)).

\begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{mixed-eps-converted-to}\\ \includegraphics[width=\linewidth]{mixed1-eps-converted-to}\\ \includegraphics[width=\linewidth]{mixed2-eps-converted-to} \end{center} \caption{Existence of more than one malicious consumer. (a) All malicious consumers are detected correctly, (b) in addition to the malicious consumers, a number of benign consumers are flagged as malicious, and (c) not all malicious consumers are detected.} \label{mixed} \end{figure}

As the proposed scheme is a statistical method, the more data is at hand, the more accurate the decision. Fig.~\ref{12months} illustrates this point: in Fig.~\ref{12months}~(a) the measurement period is one month, whereas in Fig.~\ref{12months}~(b) it is one year, so the number of samples increases $12$-fold. As a result, in Fig.~\ref{12months}~(b) the correlation coefficients of the benign consumers cluster more densely around zero, and the correlation coefficient of the malicious consumer (consumer $\#25$) lies farther apart from the others than in Fig.~\ref{12months}~(a).

\begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{12months-eps-converted-to} \caption{The effect of increasing the number of samples on the detection rate. Panels (a) and (b) illustrate the correlation coefficients of the consumers within a month and within a year, respectively.} \label{12months} \end{center} \end{figure}

\subsection{Case II: Malicious consumer adds/subtracts a fixed quantity}

Here we assume that the malicious consumer subtracts a fixed quantity $\eta$ from its amount of consumed energy. Since this quantity is independent of the actual usage of the malicious consumer, detection hinges on the periods in which the reported quantity equals zero. As a result, detecting the malicious consumer becomes harder than in the previous case, as illustrated in Fig.~\ref{case2-1}. More importantly, as can be seen from Fig.~\ref{case2-1}, the correlation coefficient of the malicious consumer no longer reaches $+1$.

\begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{case2-1-eps-converted-to} \caption{The correlation coefficient of the malicious consumer is close to those of the benign consumers and does not reach $1$.} \label{case2-1} \end{center} \end{figure}

It can be seen from Fig.~\ref{case2-1} that the correlation coefficient of the malicious consumer is fairly close to those of the benign consumers. As stated before, this makes the detection of the malicious consumer difficult. Fig.~\ref{case2_3} illustrates this point: the more samples of the consumers' power consumption we have, the more accurately we detect the malicious consumer. As can be seen, by increasing the measurement period, the probability of detecting the malicious consumer approaches $1$, while for a period of one month this probability revolves around $0.5$.
For each measurement duration, the simulation is repeated $100$ times and the probability of correctly detecting the malicious consumer is calculated.

\begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{case2_31-eps-converted-to} \caption{The probability of detecting the malicious consumer improves as the duration of report analysis grows.} \label{case2_3} \end{center} \end{figure}

\subsection{Case III: Malicious consumer adds/subtracts a random quantity}

In this section we present the simulation for the third case, where the malicious consumer adds/subtracts a random quantity to/from its reported consumed energy. As mentioned before, according to (\ref{case3}), the lowest negative correlation coefficient identifies the malicious consumer, as depicted in Fig.~\ref{case3_1}. The assumptions are again those of Fig.~\ref{Malicious}, with consumer $\#25$ as the malicious consumer, unless mentioned otherwise.

\begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{case3_1-eps-converted-to} \caption{The correlation coefficients of the benign and malicious consumers. The malicious consumer has the lowest correlation coefficient.} \label{case3_1} \end{center} \end{figure}

The performance of our SBPP scheme in the three cases is evaluated in Table~\ref{Tbl.2}. Here, each simulation is repeated $1000$ times, the number of correct detections is counted, and the probability of correct detection is calculated.

\begin{table} [t] \centering \caption{Probability of Correct Detection} \label{Tbl.2} \begin{tabular}{lcccc} \cline{2-5} & \multicolumn{4}{c}{Period of measurement}\\ & 1 month & 3 months & 6 months & 1 year \\ \hline Case I & 1 & 1 & 1 & 1 \\ Case II & 0.51 & 0.98 & 1 & 1 \\ Case III & 0.92 & 1 & 1 & 1 \\ \hline \end{tabular} \end{table}

\section{Conclusion} \label{sec:conclusion}

We presented a statistical-based approach for data gathering in smart grids that preserves the privacy of the consumers in the grid. We investigated the capability of the proposed scheme to detect malicious consumers who dispatch bogus data to suppliers for a specific purpose, such as abating their own costs or imposing expenditure on the supplier (subversive goals). We showed that when there exists at most one malicious consumer in each data gathering region, that consumer is detected with certainty. We considered three distinct sabotage cases and showed that our proposed method works properly in all of them. However, when the number of malicious consumers in a region grows, our statistical method may flag some benign consumers as malicious, or some malicious consumers may remain undetected. We also presented a billing algorithm that delegates the responsibility for billing to the data aggregator of each region; in this way, not only does the signalling overhead decrease significantly, but billing also takes place at a trusted entity where malicious consumers have been distinguished from benign ones. Our simulation results verified these claims.
{ "timestamp": "2019-05-16T02:18:59", "yymm": "1904", "arxiv_id": "1904.06576", "language": "en", "url": "https://arxiv.org/abs/1904.06576" }
\section{Introduction} \label{sec:intro}

The introduction of decentralized blockchains, initially conceived as a means for cash payments without a trusted intermediary in the form of Bitcoin~\cite{bitcoin}, has sparked a flurry of interest in developing decentralized applications for a wide variety of areas. Smart contracts~\cite{szabo1997}, programs whose consistent global execution is enforced by a consensus protocol among a decentralized network of nodes rather than by a single server, have in recent years gained traction as a means of dis-intermediating a wide assortment of non-financial tasks.

Unfortunately, the base layers of fully decentralized blockchain systems, as deployed presently, are extremely limited in their transactional throughput. The Bitcoin blockchain currently processes an average of $\sim$3 transactions per second~\cite{scaling} and is operating at maximum capacity, while the Ethereum blockchain is capped at $\sim$15 transactions per second and is often operating at its maximum capacity as well. In contrast, global payment processors handle on the order of tens of thousands of transactions per second~\cite{scaling}.

This work presents a study of numerous scaling methodologies devised over the past decade and analyzes their features and challenges. A novel scaling direction, composed mostly of well-known and studied components, is then introduced~\footnote{An earlier version of this work introduced the high-level ideas that were later refined into a minimal viable spec~\cite{minimal_viable_merged_consensus} of what is now known as ``optimistic rollup''~\cite{optimistic_rollup_pg,fuel_labs}. The present version collects the minimal spec and subsequent improvements into a single cohesive document. A layperson's edition of this work is also available~\cite{the_whys_of_oru}.}. A side chain construction is used to avoid any mainnet protocol changes, which would require coordinating a fork with client developers, users, application developers, and exchanges, while still allowing innovations and improvements to be deployed~\cite{sidechains,forks}. Merged consensus is used to progress the side chain, borrowing security from the parent chain using a commit chain scheme~\cite{nocust}. Finally, only a bare minimum of functionality is enabled on the side chain: financial transactions and trust-minimized movement of funds between the side chain and its parent chain.

Section~\ref{sec:prelim} presents fundamental technical preliminaries. Section~\ref{sec:method} describes our proposal for scaling using side chains with merged consensus. Finally, Section~\ref{sec:related} gives an overview of, and contrasts, previous scaling proposals.

\section{Preliminaries} \label{sec:prelim}

This section gives a high-level overview of the fundamental techniques that our proposed scaling solution builds upon. It also provides more precise definitions for various terms that are widely used but often poorly, incorrectly, or incompletely defined.

\subsection{Decentralized Blockchains} \label{sec:prelim:blockchain}

At its core, a \textit{blockchain} is nothing more than a database consisting of a cryptographic-hash-linked chain of blocks that defines a total ordering of transactions and is deterministically verifiable.
A blockchain is deterministically verifiable if its correctness can be determined using only data contained within itself (\textit{i.e.}, it is self-consistent); this is accomplished through the use of cryptographic hashing to link blocks together and digital signatures for each transaction. Execution engines can be built on top of this ordered (\textit{i.e.}, serialized) data, such as the Ethereum Virtual Machine (EVM) in Ethereum~\cite{ethereum}. Pictured in Figure~\ref{fig:blockchain} is a high-level representation of the blockchain data structure, with each block (square) containing a hash of the previous block in the chain (arrow). \begin{figure}[!h] \centering \begin{tikzpicture}[every node/.style = {shape=rectangle, draw=black, align=center, minimum size=1cm}] \node (b1) [draw opacity=0] {\dots}; \node (b2) [right=of b1] {$B_{i-2}$}; \node (b3) [right=of b2] {$B_{i-1}$}; \node (b4) [right=of b3] {$B_{i}$}; \path[->] (b2) edge (b1) (b3) edge (b2) (b4) edge (b3) ; \end{tikzpicture} \caption{Blockchain structure.} \label{fig:blockchain} \end{figure} \textit{Decentralized} blockchains are of particular interest for use in systems with open participation that must be publicly auditable, such as a dis-intermediated payments system. \begin{definition} A blockchain system is \textit{distributed} if it is replicated across \textbf{more than one} physical computer (a system participant, \textit{i.e.}, a node). \end{definition} \begin{definition} \label{def:permissionless} A blockchain system is \textit{permissionless} if it can be read from and written to without requiring permission from \textbf{existing} system participants. In practical terms, this property requires that users be able to participate as block producers of the system without first being part of the system. \end{definition} Colloquially, a blockchain system is \textit{trustless} if it does not require trusting \textbf{any} external resources to interact with it (including, but not limited to, a third-party computer, an escrow service, a trusted notary, and binary executables with no source code or non-deterministic builds). This description of ``trustless'' is circular, however, as ``trust'' is not defined; to that end, we now develop a more rigorous definition of the word. State elements in a blockchain system come in two variants: owned (associated with a public address) or unowned (not associated with any address, \textit{i.e.}, unused). Delegation of ownership is possible, so the ``owner'' of a state element can be considered to be both the holder of the private key associated with the address of the owned state element and any delegated owners that have been granted varying permissions over the state element. \begin{definition} An owned state element is \textit{live} if it can be modified by its owner (or by its delegated owners within their permissions) in finite time. An unowned state element is live if it can become owned in finite time. \end{definition} \begin{definition} An owned state element is \textit{safe} if it can never be modified by any of its non-owners (or by its delegated owners outside their permissions). Unowned state elements are trivially safe. \end{definition} We can use these to form a concrete, and more importantly non-circular, definition of ``trustlessness'': \begin{definition} \label{def:trustless_concrete} A blockchain system is \textit{trustless} if and only if its state is (\textit{i.e.}, all its state elements are) both live and safe.
\end{definition}

\noindent We now compose the previous definitions to arrive at a usable and useful definition of the term ``decentralized.''

\begin{definition} \label{def:decentralized} A blockchain system is \textit{decentralized} if and only if it is 1) distributed, 2) trustless, and 3) permissionless. \end{definition}

Definition~\ref{def:trustless_concrete} is useful when evaluating layer-2 scaling techniques. Unlike layer-1 blockchain systems, which are physical systems, layer-2 constructions that anchor onto parent chains for security and other guarantees can be thought of as logical abstractions, for which the notion of trust in physical machines or persons is not useful. It is trivial to see that a blockchain system that is trustless by Definition~\ref{def:trustless_concrete} under no assumptions would violate the FLP impossibility~\cite{flp}. Indeed, even layer-1 blockchain systems are only trustless under a majority block producer assumption: in the case of PoW blockchains, a majority of miners can censor transactions indefinitely, violating state liveness. Our goal is therefore to minimize the assumptions needed to make such systems trustless.

\subsection{Consensus Protocols} \label{sec:prelim:consensus}

Permissionless blockchains (see \S~\ref{sec:prelim:blockchain} for definitions) require a consensus protocol for writing blocks to the database~\cite{consensus}. Given their permissionless nature, they require some form of Sybil resistance mechanism (\textit{i.e.}, impersonating multiple users should not grant more power in the protocol). Nakamoto Consensus, introduced for use in Bitcoin~\cite{bitcoin}, is the first consensus protocol that operates in a permissionless setting, and it leverages Proof-of-Work (PoW) as its Sybil resistance mechanism. A cryptographic hash function, modeled as a random oracle, can be used to determine a block producer~\cite{backbone}, with each participant having a chance of becoming the block producer proportional to the computational power they devote to the protocol. The \textit{longest chain} (or, more precisely, heaviest chain) of valid blocks, each with a sufficient proof of work, is considered the canonical chain---this is the fork choice rule generally used by the family of consensus protocols based on Nakamoto Consensus. The primary function of these consensus protocols is to provide \textit{security} to the chain.

\begin{definition} The \textit{security} of a blockchain system is the cost of changing its history (\textit{i.e.}, rewriting blocks through a chain re-organization). \end{definition}

\noindent This concept of blockchain security is distinct from, \textit{e.g.}, cryptographic security or smart contract security. Concerns over the shortcomings of Nakamoto Consensus-style protocols (lack of strong finality guarantees, shown in Figures~\ref{fig:reorg_begin} and~\ref{fig:reorg_complete}, where a previously-shorter chain overtakes the previously-longest chain and becomes the new canonical chain), along with the continued use of PoW (enormous energy waste), have led to the search for new consensus protocols that employ stake-based Sybil resistance, known as Proof-of-Stake (PoS)~\cite{ethereum,ouroboros,avalanche2}.
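For illustration, the hash-linked structure and the heaviest-chain fork choice rule can be captured in a small sketch. The following Python model is our own simplification (a scalar \texttt{work} field stands in for an actual proof of work), not a normative specification:

\begin{verbatim}
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    prev_hash: str   # hash of the parent block
    txs_root: str    # commitment to the block's transactions
    work: int        # work expended on this block (simplified)

def block_hash(b: Block) -> str:
    data = f"{b.prev_hash}|{b.txs_root}|{b.work}".encode()
    return hashlib.sha256(data).hexdigest()

def is_consistent(chain: list) -> bool:
    # Self-consistency: every block commits to its parent's hash.
    return all(chain[i].prev_hash == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

def fork_choice(forks: list) -> list:
    # Heaviest-chain rule: the valid fork with the most cumulative work wins.
    valid = [f for f in forks if is_consistent(f)]
    return max(valid, key=lambda f: sum(b.work for b in f))
\end{verbatim}

A reorganization, as in Figures~\ref{fig:reorg_begin} and~\ref{fig:reorg_complete}, corresponds to \texttt{fork\_choice} returning a different fork once the previously-shorter chain accumulates more work.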
\begin{figure}[!h] \centering \begin{tikzpicture}[every node/.style = {shape=rectangle, draw=black, align=center, minimum size=1cm}] \node (i) [draw opacity=0] {\dots}; \node (a1) [right=of i] {$B_{i}$}; \node (b1) [above right=0.3cm and 1cm of a1, fill=lightgray] {$B'_{i+1}$}; \node (c1) [below right=0.3cm and 1cm of a1] {$B_{i+1}$}; \node (c2) [right=of c1] {$B_{i+2}$}; \path[->] (a1) edge (i) (b1) edge (a1) (c1) edge (a1) (c2) edge (c1) ; \end{tikzpicture} \caption{Blockchain reorganization begins. Previously-shorter chain is marked in gray.} \label{fig:reorg_begin} \end{figure} \begin{figure}[!h] \centering \begin{tikzpicture}[every node/.style = {shape=rectangle, draw=black, align=center, minimum size=1cm}] \node (i) [draw opacity=0] {\dots}; \node (a1) [right=of i] {$B_{i}$}; \node (b1) [above right=0.3cm and 1cm of a1, fill=lightgray] {$B'_{i+1}$}; \node (b2) [right=of b1, fill=lightgray] {$B'_{i+2}$}; \node (b3) [right=of b2, fill=lightgray] {$B'_{i+3}$}; \node (c1) [below right=0.3cm and 1cm of a1] {$B_{i+1}$}; \node (c2) [right=of c1] {$B_{i+2}$}; \path[->] (a1) edge (i) (b1) edge (a1) (b2) edge (b1) (b3) edge (b2) (c1) edge (a1) (c2) edge (c1) ; \end{tikzpicture} \caption{Blockchain reorganization complete. Previously-shorter chain is now longer.} \label{fig:reorg_complete} \end{figure}

Attempts to replicate the many desirable properties of Nakamoto Consensus, while avoiding its few undesirable ones, with stake-based consensus protocols have so far been unsuccessful, however. No blockchain system employing a stake-based Sybil-resistant permissionless consensus protocol with satisfactory properties has been deployed in practice as of this writing~\cite{longest_chain_pos}, and to the best of our knowledge, despite many claims to the contrary, no such system has been devised to date. In addition, purely stake-based consensus protocols are not decentralized by Definition~\ref{def:decentralized}---specifically Definition~\ref{def:permissionless}---as 1) there is no known way to fairly distribute stake initially, and 2) participation in the system requires coins or tokens to be purchased from existing system participants. Both of these problems are solved by Nakamoto Consensus' use of PoW.

\subsection{The Scaling Problem} \label{sec:prelim:scalingproblem}

Limitations in transactional throughput for public blockchains, colloquially known as ``The Scaling Problem,'' present a significant roadblock to real-world adoption of such systems. The root cause of the scaling bottleneck is that every block in a decentralized blockchain network must be fully validated by every node (client) on the network. Transaction throughput can be increased trivially by sacrificing security or decentralization, so the true challenge lies in designing a system that is scalable, secure, and decentralized. Almost universally, scaling proposals aim to not have every node validate every block, but rather to have a subset of nodes validate a subset of (in some way relevant) transactions.

Layer-1 scaling proposals aim to increase the transaction throughput of the base chain and generally employ \textit{sharding}~\cite{scaling}: splitting up transactions and state into individual shards instead of collecting them all into a single logical chain. In this model, transaction throughput increases proportionally to the number of shards (minus overhead for managing the shards). It should be noted that PoW-based sharding is undesirable, as security would be split between shards.
Layer-2 scaling proposals~\cite{sidechains,state_channels,counterfactual,plasma,lighting_network,arbitrum} aim to move groups of transactions off-chain (or, more precisely, away from the parent chain, \textit{e.g.}, Bitcoin or Ethereum, and onto a second-layer network). Transactions can be grouped by type or by application. For example, micropayments can be done through a payment channel network~\cite{lighting_network}, or transactions specific to a single application can be processed through their own chain~\cite{plasma}. We shall see later that the scaling solution proposed in this work is also centered around the idea of not having every node on the network validate every transaction, namely by separating execution (validation) from data ordering and availability.

\subsection{Improvements to Block Propagation} \label{sec:prelim:propagation}

As mentioned in \S~\ref{sec:prelim:scalingproblem}, the scaling bottleneck is due to every node fully validating every block. A prerequisite to validating a new block produced by the network is downloading it in its entirety. To this end, several techniques have been proposed~\cite{bip152,graphene,minisketch}, based largely on set reconciliation using bloom filters~\cite{bloom}, invertible bloom lookup tables~\cite{iblt}, and sketches~\cite{sketches}, to allow blocks to be constructed locally based on an extremely compressed representation of a block's included transactions. Using these, block-producing nodes can spread their network bandwidth requirements over the entire length of the blocktime, downloading each transaction only once, as it is propagated through the network.

\subsection{Fraud Proofs and Data Availability} \label{sec:prelim:fraudproofs}

A model for generalized fraud proofs introduced in~\cite{fraud_proofs} allows for trust-minimized light clients. Non-fully-validating nodes (known as light nodes, or light clients) only check block headers for validity---in a PoW blockchain, that valid and sufficient proof of work was done. The contents of blocks are assumed to be too expensive for a light client to download and validate, even for a single block. The proposed fraud proof scheme modifies the transaction Merkle tree to add intermediate state root commitments into it. A fraud proof can then consist of a parametrizable number of Merkle branches against the initial (possibly intermediate) pre-state from which to begin applying transactions, comparing the resultant state root with the committed post-state root.

In addition to fraud proofs,~\cite{fraud_proofs} proposes to use erasure codes~\cite{erasure_codes} for data availability proofs, which are needed because a fraud proof cannot be generated for an unavailable block. These data availability proofs involve erasure coding each block, with clients randomly sampling a fixed number of erasure-coded chunks. A fraud proof and an associated synchrony assumption are needed in case the erasure coding was performed incorrectly. Using a 2D erasure coding scheme, fraud proofs have at most $O(\sqrt{n})$ cost (where $n$ is the blocksize); a later work~\cite{coded_merkle_tree} proposed an order-optimal variant of this scheme, with $O(\log{n})$ fraud proof cost. The existence of compact fraud proofs and data availability proofs allows light clients to operate with reduced trust assumptions.
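To make the fraud proof mechanism concrete, the following toy Python sketch shows a binary Merkle commitment and the core of a fraud proof check: re-executing a claimed window of transactions from a pre-state and comparing the result against the committed post-state root. The \texttt{apply\_tx} and \texttt{state\_root} callables are abstract placeholders of our own, and the Merkle branches that authenticate the pre-state against the block header are omitted for brevity:

\begin{verbatim}
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list) -> bytes:
    # Binary Merkle tree; the last node is duplicated when a level is odd.
    assert leaves, "empty block"
    nodes = [h(l) for l in leaves]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def check_fraud_proof(pre_state, txs, post_state_root, apply_tx, state_root):
    # Re-execute the claimed window of transactions from the pre-state and
    # compare the recomputed root with the committed post-state root.
    state = pre_state
    for tx in txs:
        state = apply_tx(state, tx)
    return state_root(state) != post_state_root   # True => block is fraudulent
\end{verbatim}

Because intermediate state roots are committed every few transactions, the window \texttt{txs} that must be re-executed (and hence the proof itself) stays small regardless of the block size.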
Without these proofs, light clients must trust that a majority of block producers is honest; with them, light clients only require that a single honest node exists in the network that is capable of relaying proofs to them. In practice, this trust assumption is not objectively stronger than the trust the vast majority of users place in, for example, the hardware manufacturer of their CPU, or the implementation and design of cryptographic hash functions without backdoors.

\section{Scaling Decentralized Blockchains} \label{sec:method}

This section discusses in depth our proposed scaling solution: a side chain with merged consensus for financial transactions. The proposed construction is capable of handling a large number of transactions per second with security and decentralization virtually identical to those of its parent chain. Comparisons to other scaling proposals are also discussed. Without loss of generality, we will assume the parent chain of this system is Ethereum~\cite{ethereum} and use the associated vocabulary; any chain with statefulness and sufficient expressivity for smart contracts will suffice.

\subsection{A Side Chain for Financial Transactions} \label{sec:method:sidechain}

Despite suggestions to parallelize the validation of blocks in Ethereum, the bottleneck of client software is in practice disk I/O bandwidth~\cite{eip648}. A combination of the poor design of the EVM's opcodes, complex expressivity, and the use of an inefficient state trie data structure makes it challenging to develop efficient software, and potentially hardware, to validate transactions. This work proposes a side chain construction with just enough expressivity for performing financial transactions and trust-minimized movement of funds between the side chain and its parent chain (with optional stateless predicate scripting functionality to support a subset of state channel~\cite{state_channels} and other constructions). Rather than the accounts data model of the EVM, a UTXO data model is used, as the latter is simpler to reason about and, in practice, to optimize parallel implementations for.

The side chain's consensus protocol is \textit{merged consensus}, a permissionless consensus protocol that runs entirely on-chain; it is discussed in more detail in Section~\ref{sec:method:merged_consensus}. Security is borrowed from the main chain by timestamping side chain blocks, which prevents history-rewriting attacks that do not also affect the main chain. Thanks to the existence of general-purpose compact fraud proofs and data availability proofs (\S~\ref{sec:prelim:fraudproofs}), the number of transactions included per block can be increased to an arbitrarily large size, bound only by the physical limitations of block-producing (\textit{i.e.}, mining) nodes in transmitting large quantities of data with low computational complexity, while the overall system still remains decentralized. As with all other scalability proposals, increased transaction throughput is achieved by not having every node in the network validate every transaction: with this scheme, main chain block producers are only required to order the side chain's data, while the side chain is validated by its participants.
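For concreteness, a minimal sketch of such a UTXO data model with stateless predicates is shown below. The types and field names are our own illustrative assumptions, not the actual transaction format:

\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class Outpoint:
    txid: str          # hash of the transaction that created the output
    index: int         # position of the output within that transaction

@dataclass(frozen=True)
class Output:
    amount: int        # value in the smallest unit
    predicate: bytes   # stateless spending condition (e.g., a pubkey hash)

@dataclass(frozen=True)
class Transaction:
    inputs: tuple      # Outpoints being consumed
    outputs: tuple     # newly created Outputs

def check_balance(tx: Transaction, utxo_set: dict) -> bool:
    # Inputs must exist (be unspent) and fully fund the outputs plus fees.
    if any(o not in utxo_set for o in tx.inputs):
        return False
    funds = sum(utxo_set[o].amount for o in tx.inputs)
    return funds >= sum(o.amount for o in tx.outputs)
\end{verbatim}

Because each transaction names the exact outputs it consumes, disjoint sets of transactions touch disjoint state and can be validated in parallel, which is the property alluded to above.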
It should be noted that while the design presented in this section is most suited to building a scalable decentralized payment system, it can be trivially extended to support a general-purpose stateful smart contract execution platform (albeit with only part of the performance gains)~\cite{minimal_viable_merged_consensus}.

\subsection{Side Chain Design} \label{sec:method:design}

This section gives a bird's-eye view of the proposed side chain design, which is then analyzed in subsequent sections. A contract is deployed onto the parent chain that keeps track of side chain block headers, deposits, and withdrawals, and processes fraud proofs. Using a leader selection protocol that runs entirely on-chain (discussed in \S~\ref{sec:method:merged_consensus}), the leader can submit a side chain block to the contract, along with a bond of parametrizable size. This block \textit{must} extend the tip of the side chain known to the contract; otherwise it is immediately rejected. The side chain block is posted in its entirety in the transaction's data field, \textit{e.g.}, \texttt{calldata} for Ethereum~\cite{roll_up}. The contract authenticates (\textit{i.e.}, Merkleizes) the side chain's transactions and either compares this against the posted transactions root or computes the actual block header hash (an implementation detail). Finally, the block header hash is saved by the contract for later use---essentially, the contract runs a light client of the side chain.

After a parametrizable finalization delay, an unchallenged side chain block can be \textit{finalized} (\textit{i.e.}, it becomes irreversible). If a non-finalized side chain block includes an invalid state transition, anyone may submit a fraud proof~\cite{fraud_proofs} on-chain which, if valid, rolls back the tip of the side chain to the previous block and rewards half the bonds of the orphaned side chain blocks to the prover (the other half is effectively or explicitly burned). Side chain block producers are thus incentivized to fully validate the chain, lest they extend an invalid block and have their bond burned. (A sketch of this contract logic is given at the end of the next subsection.) Deposits are trivial in this system: users can simply send funds to the contract, then spend them on the side chain immediately. Withdrawals can be accomplished by first burning funds on the side chain, then posting a non-interactive withdrawal request on-chain with an inclusion proof of this burn against a finalized side chain block.

Transaction latency can be as fast as the parent chain's block times: with client-side validation of the fully-available data, users (and side chain block producers) can convince themselves that a block is valid without having to wait until it is finalized. As otherwise valid blocks that build upon a valid history are guaranteed by construction to eventually be finalized, no additional latency is introduced.

\subsection{Merged Consensus} \label{sec:method:merged_consensus}

\textit{Merged consensus} is a consensus protocol that is fully verifiable on-chain. While the name bears similarity to merged mining (\S~\ref{sec:related:mergedmining}), merged consensus provides us with vastly different properties. Recall that decentralized consensus protocols give a blockchain security (\S~\ref{sec:prelim:consensus}). These protocols usually consist of a number of distinct components: \begin{enumerate} \item A fork choice rule: how to choose between two otherwise valid chains. \item A block validity function: completely defines the state transition.
\item A leader selection algorithm: how to determine who gets to progress the chain by extending the tip with a new block. (Some decentralized consensus protocols are leaderless~\cite{avalanche2}.) \item A Sybil resistance mechanism: such as Proof-of-Work or Proof-of-Stake. \end{enumerate} The proposed side chain scheme is \textit{fork-free} by construction---as new blocks are enforced to only be able to extend the single tip---so only a trivial fork choice rule is needed. The block validity function is a simple UTXO data model with optional stateless predicate scripting for scalable payments, and potentially any arbitrary computational model if that is desired. This leaves leader selection and Sybil resistance. We propose the simplest possible leader selection algorithm: ``first come, first served''~\cite{minimal_viable_merged_consensus}. The first transaction that gets included on the parent chain that successfully extends the tip is the post-facto leader. Sybil resistance is provided by parent chain transaction fees and inherent rate-limiting. Normal operation of the side chain and parent chain is shown in Figure~\ref{fig:sc_merged_consensus}; note that unlike in merged mining (Figure~\ref{fig:merged_mining}) side chain blocks cannot be produced without a linked parent chain block. This extremely simple proposal works because \textit{the parent chain already provides security} in the form of a timestamping server for non-repudiation and non-equivocation~\cite{catena}. This is known as the commit chain paradigm. \begin{definition} A \textit{commit chain}~\cite{nocust} is a side chain that borrows security from its parent chain through periodic commitment of block hashes (\textit{i.e.}, including a side chain block hash into the parent chain as a state transition). \end{definition} If a more orderly leader selection algorithm is desired (say, for easier block propagation), anything that runs entirely on-chain can be used, such as randomly shuffling staked validators using an on-chain random number generator~\cite{randao}. Staking in this manner does not need a separate token, as the whole purpose of a native coin as originally envisioned by Nakamoto was to provide a disincentive against a majority-hashrate history rewrite~\cite{bitcoin}---a non-issue with merged consensus as the parent chain provides security. 
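Combining the contract described in \S~\ref{sec:method:design} with this ``first come, first served'' rule, the core on-chain logic can be sketched as follows. This is a toy Python model of what would in practice be, \textit{e.g.}, a Solidity contract on Ethereum; the \texttt{verify} callback stands in for fraud proof verification, and all names are our own illustrative assumptions:

\begin{verbatim}
class MergedConsensusContract:
    # Parent-chain contract sketch: tracks the side chain tip, accepts
    # blocks first come, first served, and rolls back on proven fraud.
    def __init__(self, bond, finalization_delay):
        self.bond = bond
        self.delay = finalization_delay
        self.headers = []   # list of (block_hash, submitter, time_submitted)

    def submit_block(self, prev_hash, block_hash, submitter, value, now):
        assert value == self.bond, "block must be bonded"
        tip = self.headers[-1][0] if self.headers else None
        assert prev_hash == tip, "must extend the current tip"
        self.headers.append((block_hash, submitter, now))

    def prove_fraud(self, height, proof, prover, now, verify):
        block_hash, _, submitted = self.headers[height]
        assert now - submitted < self.delay, "block already finalized"
        assert verify(block_hash, proof), "invalid fraud proof"
        orphaned = self.headers[height:]
        self.headers = self.headers[:height]        # roll back the tip
        reward = len(orphaned) * self.bond // 2     # half to prover,
        return prover, reward                       # half burned
\end{verbatim}

The contract never executes side chain transactions itself; it merely orders bonded block commitments and adjudicates fraud proofs, which is what allows throughput to scale independently of parent chain execution.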
\begin{figure}[!h] \captionsetup{justification=centering} \centering \begin{tikzpicture}[every node/.style = {shape=rectangle, draw=black, align=center, minimum size=1cm}] \node (idots) [draw opacity=0] {\dots}; \node (sdots) [draw opacity=0, above=0.4cm of idots] {\dots}; \node (i) [right=of idots] {$B_{i}$}; \node (s) [right=of sdots] {$S_{j}$}; \node (a1) [right=of i] {$B_{i+1}$}; \node (a2) [right=of a1] {$B_{i+2}$}; \node (a3) [right=of a2] {$B_{i+3}$}; \node (a4) [right=of a3] {$B_{i+4}$}; \node (a5) [right=of a4] {$B_{i+5}$}; \node (a6) [right=of a5] {$B_{i+6}$}; \node (s1) [draw opacity=0, right=of s] {}; \node (s2) [right=of s1] {$S_{j+1}$}; \node (s3) [right=of s2] {$S_{j+2}$}; \node (s4) [draw opacity=0, right=of s3] {}; \node (s5) [right=of s4] {$S_{j+3}$}; \node (s6) [right=of s5] {$S_{j+4}$}; \path[->] (i) edge (idots) (s) edge (sdots) (i) edge (s) (a1) edge (i) (a2) edge (a1) (a2) edge (s2) (a3) edge (a2) (a3) edge (s3) (a4) edge (a3) (a5) edge (a4) (a5) edge (s5) (a6) edge (a5) (a6) edge (s6) ; \path[->] (s2) edge (s) (s3) edge (s2) (s5) edge (s3) (s6) edge (s5) ; \end{tikzpicture} \caption{Example normal operation of side chain with merged consensus.} \label{fig:sc_merged_consensus} \end{figure} This scheme allows for powerful organic reorganizations to occur, as shown in Figures~\ref{fig:sc_reorg1} and~\ref{fig:sc_reorg2}. For illustrative purposes, suppose there are two miners, Alice (blocks superscripted with $A$) and Bob (blocks superscripted with $B$). In Figure~\ref{fig:sc_reorg1}, Alice and Bob each mine on top of the longest parent chain they are aware of; in this case, there is a tie, so they are each mining on a different fork. By luck, Alice finds her block first and broadcasts it to the network. Bob, seeing this new block, then begins mining on top of this new longest chain as shown in Figure~\ref{fig:sc_reorg2}. This scheme supports organic short reorganizations, as each fork is internally consistent and locally canonical to the miner working on it. This also allows the side chain to seamlessly support persistent chain split scenarios, \textit{e.g.}, in the case of a contentious hard fork. 
\begin{figure}[!h] \captionsetup{justification=centering} \centering \begin{subfigure}[b]{0.4\textwidth} \begin{tikzpicture}[every node/.style = {shape=rectangle, draw=black, align=center, minimum size=1cm}] \node (idots) [draw opacity=0] {\dots}; \node (sdots) [draw opacity=0, above=0.4cm of idots] {\dots}; \node (i) [right=0.5cm of idots, fill=red!50] {$B_{i}$}; \node (s) [right=0.5cm of sdots, fill=red!50] {$S_{j}$}; \node (a1) [above right=0.5cm and 0.5cm of i, fill=red!50] {$B^{A}_{i+1}$}; \node (a2) [right=0.5cm of a1, fill=red!50] {$B^{A}_{i+2}$}; \node (s1) [above=0.4cm of a1, fill=red!50] {$S^{A}_{j+1}$}; \node (s2) [right=0.5cm of s1, fill=red!50] {$S^{A}_{j+2}$}; \node (b1) [below right=0.5cm and 0.5cm of i] {$B^{B}_{i+1}$}; \node (b2) [right=0.5cm of b1] {$B^{B}_{i+2}$}; \node (t1) [above=0.4cm of b1] {$S^{B}_{j+1}$}; \node (t2) [right=0.5cm of t1] {$S^{B}_{j+2}$}; \path[->] (i) edge (idots) (s) edge (sdots) (i) edge (s) (a1) edge (i) (a1) edge (s1) (a2) edge (a1) (a2) edge (s2) (b1) edge (i) (b1) edge (t1) (b2) edge (b1) (b2) edge (t2) ; \path[->] (s1) edge (s) (s2) edge (s1) (t1) edge (s) (t2) edge (t1) ; \end{tikzpicture} \caption{Alice's view of the chain, in red.} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \begin{tikzpicture}[every node/.style = {shape=rectangle, draw=black, align=center, minimum size=1cm}] \node (idots) [draw opacity=0] {\dots}; \node (sdots) [draw opacity=0, above=0.4cm of idots] {\dots}; \node (i) [right=0.5cm of idots, fill=blue!50] {$B_{i}$}; \node (s) [right=0.5cm of sdots, fill=blue!50] {$S_{j}$}; \node (a1) [above right=0.5cm and 0.5cm of i] {$B^{A}_{i+1}$}; \node (a2) [right=0.5cm of a1] {$B^{A}_{i+2}$}; \node (s1) [above=0.4cm of a1] {$S^{A}_{j+1}$}; \node (s2) [right=0.5cm of s1] {$S^{A}_{j+2}$}; \node (b1) [below right=0.5cm and 0.5cm of i, fill=blue!50] {$B^{B}_{i+1}$}; \node (b2) [right=0.5cm of b1, fill=blue!50] {$B^{B}_{i+2}$}; \node (t1) [above=0.4cm of b1, fill=blue!50] {$S^{B}_{j+1}$}; \node (t2) [right=0.5cm of t1, fill=blue!50] {$S^{B}_{j+2}$}; \path[->] (i) edge (idots) (s) edge (sdots) (i) edge (s) (a1) edge (i) (a1) edge (s1) (a2) edge (a1) (a2) edge (s2) (b1) edge (i) (b1) edge (t1) (b2) edge (b1) (b2) edge (t2) ; \path[->] (s1) edge (s) (s2) edge (s1) (t1) edge (s) (t2) edge (t1) ; \end{tikzpicture} \caption{Bob's view of the chain, in blue.} \end{subfigure} \caption{Two miners, Alice $A$ and Bob $B$, mining on different heads of equal height.} \label{fig:sc_reorg1} \end{figure} \begin{figure}[!h] \captionsetup{justification=centering} \centering \begin{tikzpicture}[every node/.style = {shape=rectangle, draw=black, align=center, minimum size=1cm}] \node (idots) [draw opacity=0] {\dots}; \node (sdots) [draw opacity=0, above=0.4cm of idots] {\dots}; \node (i) [right=of idots] {$B_{i}$}; \node (s) [right=of sdots] {$S_{j}$}; \node (a1) [right=of i] {$B^{A}_{i+1}$}; \node (a2) [right=of a1] {$B^{A}_{i+2}$}; \node (a3) [right=of a2] {$B^{A}_{i+3}$}; \node (s1) [right=of s] {$S^{A}_{j+1}$}; \node (s2) [right=of s1] {$S^{A}_{j+2}$}; \node (s3) [right=of s2] {$S^{A}_{j+3}$}; \path[->] (i) edge (idots) (s) edge (sdots) (i) edge (s) (a1) edge (i) (a1) edge (s1) (a2) edge (a1) (a2) edge (s2) (a3) edge (a2) (a3) edge (s3) ; \path[->] (s1) edge (s) (s2) edge (s1) (s3) edge (s2) ; \end{tikzpicture} \caption{Alice finds the next parent chain block, so Bob reorganizes his local chain to follow the longest chain he knows of. 
Both Alice and Bob now have a consistent view of the chain.} \label{fig:sc_reorg2} \end{figure} \subsection{Security Analysis} \label{sec:method:security_analysis} The two-way peg provided by the proposed side chain scheme is trust-minimized. We assume adversaries that are weaker than a majority of block producers censoring the parent chain. As our proposal relies on fraud proofs, this is the optimal adversarial model. As merged consensus is permissionless and performs leader selection entirely on-chain, a side chain with merged consensus is no less available than its parent chain. The proposed side chain scheme is deterministic, objective, and uses no off-chain information, so it is no less consistent than its parent chain. Under the given adversarial model, anyone may become a side chain block producer, even if all other side chain block producers are censoring their transactions, so all state elements can be consumed subject to a finite delay, \textit{i.e.}, they are live. Anyone may fully validate the side chain client-side, as all data is available, and submit a fraud proof. Unless the parent chain is being censored for the duration of the side chain block finalization delay (which is outside our adversarial model), such fraud proofs can be submitted; therefore, state elements are safe. The proposed scheme is therefore optimally trustless modulo an assumption on the small amount of block space needed to post a fraud proof. This is an even stronger guarantee than what is provided by, \textit{e.g.}, Plasma chains (\S~\ref{sec:related:plasmachains}) or channels (\S~\ref{sec:related:channels}), which are likewise vulnerable to chain congestion and additionally require large amounts of cheap block space. \subsection{Further Improvements} \label{sec:method:extensions} The proposed side chain scheme can be further improved for both client-side performance and parent chain optimization. \subsubsection{Parallelizable Data Authentication and Availability} The primary scaling avenue for the proposed trust-minimized side chain design is the separation of consensus on execution from data for parent chain full nodes. The authentication (\textit{i.e.}, Merkleization) and broadcasting (transmission) of data are, however, stateless processes. Transactions can be modified to flag an invariant: that certain parts of the transaction data are not to be processed by---or even accessible to---the chain's virtual machine environment, but are instead pre-processed by specified pure functions (such as, but not limited to, Merkleization), with only the results accessible to the execution environment~\cite{multi_threaded_data_availability}. \subsubsection{UTXO-specific Fraud Proofs Without Intermediate State Serialization} The general-purpose fraud proofs scheme in~\cite{fraud_proofs} can be used for \textit{any} computational model (and so can the side chain scheme presented in this work), but is not the most effective for client-side validation in all cases. It involves serializing the state after every transaction (or every few transactions) in order to compute a new intermediate state root; this is a very expensive process that is also a single-threaded bottleneck. For the UTXO data model specifically, which is sufficient for a decentralized payment system, block producers can attach metadata to each input that commits to a claim on the exact output that it is spending. If this claim is invalid, it can be proven with a non-interactive fraud proof, as sketched below.
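A minimal sketch of this idea follows (Python; the field names and encoding are illustrative assumptions of ours, not a specification from the cited work):

\begin{verbatim}
# Sketch: UTXO-specific fraud proof. The block producer attaches,
# next to each input, a claim of the exact output being spent.
from dataclasses import dataclass

@dataclass(frozen=True)
class OutputClaim:
    txid: bytes    # transaction that created the output
    index: int     # position of the output in that transaction
    amount: int    # claimed value of the output
    owner: bytes   # claimed spending predicate/owner

def fraud_proven(claimed: OutputClaim, actual: OutputClaim) -> bool:
    # `actual` must be accompanied by a Merkle inclusion proof against
    # a previously committed block (verified separately). Any mismatch
    # with the producer's claim invalidates the side chain block.
    return claimed != actual
\end{verbatim}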
This scheme does not require any intermediate state serialization, and in fact no state serialization at all~\cite{bip141,compact_utxo_fraud_proofs}. \subsubsection{The State-lookupless Client Paradigm} State accesses are the primary bottleneck for blockchains with stateful smart contracts, \textit{e.g.}, Ethereum. The stateless client paradigm~\cite{utreexo} attempts to remove this bottleneck by instead having transactions include a \textit{witness} to the pre-state of the transaction along with the post-state elements, with full nodes only needing to store a logarithmic- or fixed-sized accumulator. The issue with this approach is twofold: 1) light clients now need to rely on service providers to a greater extent, as they cannot craft a complete transaction without knowing the state, and 2) witnesses are immediately outdated and so must be kept up to date, potentially increasing computation or network bandwidth requirements. The state-lookupless client paradigm~\cite{state_lookupless} likewise has transactions provide a witness against a dynamic accumulator~\cite{smt} of the state. Full nodes then also check this witness against some of the most recent blocks (a system parameter); if the witness is too old, the transaction is invalid. Since the state transition of each transaction is uniquely and totally defined in the UTXO data model, the process of validating a stateless transaction, then ensuring it is not spending a spent output in subsequent blocks, is an entirely stateless process. Finally, if valid and included in a block, the transaction's state transition is applied to the state, which must be kept by all full nodes. \subsubsection{On-Chain Data Availability Proofs} Rather than posting all data on-chain all the time, the data availability scheme of~\cite{fraud_proofs} can be used. However, as it relies on client-side randomness and consensus support, it cannot be implemented entirely on-chain, and must instead be exposed through a Foreign Function Interface~\cite{non_interactive_data_availability_proofs}, such as a precompile. This is the core idea of blockchains such as LazyLedger~\cite{lazyledger}, which completely separate consensus on execution from data availability and ordering, allowing them to achieve the scalability of sharded systems without the complexities of sharding. \subsubsection{Halting for Weaker Synchrony Requirements} One potentially problematic feature of the proposed scheme, along with Plasma chains (\S~\ref{sec:related:plasmachains}), channels (\S~\ref{sec:related:channels}), and other scaling proposals that rely on fraud proofs, is that users of the side chain must be online periodically for as long as they have funds on the side chain (potentially forever). Instead, we can simply \textit{halt} the side chain after a fixed, predetermined amount of time, \textit{e.g.}, measured in blocks~\cite{side_chains_halting}. Then we allow a very long period of time---potentially months---during which anyone may submit a fraud proof to roll back the tip of the chain, but not extend it with new blocks. Only after this time has passed are side chain blocks considered finalized, and withdrawals can be performed non-interactively against the final state of the side chain. Users now know exactly when they have to be online to validate the chain: a known, finite time slot. If users wish to withdraw their funds early from the side chain, they may do so trustlessly via atomic swaps~\cite{atomic_swap} with a liquidity provider.
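The resulting schedule is simple enough to state as a pair of predicates. The following sketch (Python; the constants are arbitrary placeholders, not recommended parameters) captures when the side chain may still be extended and when its final state becomes withdrawable:

\begin{verbatim}
# Sketch: halting schedule for weaker synchrony requirements.
HALT_HEIGHT = 100_000      # side chain halts at this height (fixed)
CHALLENGE_WINDOW = 50_000  # in parent chain blocks; possibly months

def may_extend(side_height: int) -> bool:
    # New side chain blocks are only accepted before the halt.
    return side_height < HALT_HEIGHT

def is_finalized(parent_height: int, halt_parent_height: int) -> bool:
    # During the challenge window only fraud proofs (rollbacks) are
    # accepted; afterwards, withdrawals against the final state are
    # non-interactive.
    return parent_height >= halt_parent_height + CHALLENGE_WINDOW
\end{verbatim}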
\section{Related Work} \label{sec:related} A wide range of scaling techniques have been proposed over the years, and are discussed in this section. More importantly, an analysis of incentives and shortcomings for each of these techniques is shown. \subsection{Validity Proofs and Succinct Arguments of Knowledge} \label{sec:related:validityproofs} Recent years have seen the emergence of almost-practical constructions employing succinct arguments of knowledge~\cite{zksnarks,zkstarks} that are zero-knowledge. This class of protocols allows a prover to generate a proof of an arbitrary arithmetic circuit's correct execution over some input that can then be verified efficiently. It may initially seem that using these is superior to constructions that make use of fraud and data availability proofs in the context of layer-2 scaling techniques, as the latter rely on an assumption that the parent chain is readily available to post challenges to, while the former always guarantees correct state execution. Unfortunately, circuit-based zero-knowledge protocols have fundamental limitations that make them inappropriate for use as a core component of scaling techniques. First, proof generation is monopolistic, rather than competitive as PoW mining is. Mining is a random process~\cite{bitcoin}, and even a miner using pen-and-paper is capable of producing a block today if they get lucky; censoring other block producers requires a \textit{majority} of mining power. Proof generation for these zero-knowledge protocols, on the other hand, is monopolistic: the user with the lowest-latency prover will always win the race to generate proofs first when attempting to prove execution of the same circuit over the same inputs. Such a system tends towards becoming permissioned over time, especially when incentives for dispersing proving power are non-existent, and will resemble single-operator Plasma chain constructions in this regard---though without the exit game and synchrony assumptions needed by those (\S~\ref{sec:related:plasmachains}). Second, and more importantly, a completely transparent blockchain or layer-2 system can be rolled back in the event of an implementation bug---either with a forced re-organization~\cite{bitcoin_rollback,bitcoin_rollback_cve} or a forced special state transition to revert unwanted effects~\cite{thedao_hack_fix}. In contrast, in a system employing a circuit-based zero-knowledge protocol without full data availability and with no further checks, a bug in either the implementation of the circuit or the trusted setup (if the protocol requires one)~\cite{zcash_bug} may result in \textit{permanent} state corruption~\cite{zcash_bug_turnstile} that cannot be recovered from, save for restarting the chain from genesis. For layer-2 constructions that have full data availability and use the zero-knowledge protocol only for proving correct execution of state transitions~\cite{roll_up}, a larger surface for implementation bugs exists, as off-chain code must be implemented correctly in addition to the on-chain smart contract that verifies proofs. This is an especially egregious problem given the complexity of implementing arithmetic circuits and the current lack of mature tooling (\textit{i.e.}, formal verification, linting, etc.) for developing such programs. Attempts to alleviate this make the use of zero-knowledge proofs redundant, and reduce to a Merkle computer verification game~\cite{truebit,arbitrum}, a Plasma chain construction~\cite{plasma}, or something similar.
\subsection{Merged Mining} \label{sec:related:mergedmining} Merged mining~\cite{merged_mining_namecoin,merged_mining_forum} is a means of re-using computational power across two or more chains. In order to merge mine a side chain with a parent chain, the block hash of a side chain block is included in a standardized way in the parent chain block currently being mined. If the block satisfies the difficulty of either chain (with the side chain traditionally having lower difficulty), then it is considered a valid proof of work for that chain, and the block is appended to the appropriate chain~\cite{alternative_chain}. This is illustrated in Figure~\ref{fig:merged_mining}, with some blocks of the parent chain ($B$) including hashes of the merge mined side chain ($S$). \begin{figure}[!h] \centering \begin{tikzpicture}[every node/.style = {shape=rectangle, draw=black, align=center, minimum size=1cm}] \node (ib) [draw opacity=0] {\dots}; \node (b1) [right=of ib] {$B_{i}$}; \node (b2) [draw opacity=0, right=of b1] {}; \node (b3) [right=of b2] {$B_{i+1}$}; \node (b4) [right=of b3] {$B_{i+2}$}; \node (b5) [right=of b4] {$B_{i+3}$}; \node (b6) [right=of b5] {$B_{i+4}$}; \node (is) [draw opacity=0, above=0.5cm of ib] {\dots}; \node (s1) [right=of is] {$S_{j}$}; \node (s2) [right=of s1] {$S_{j+1}$}; \node (s3) [right=of s2] {$S_{j+2}$}; \node (s4) [draw opacity=0, right=of s3] {}; \node (s5) [right=of s4] {$S_{j+3}$}; \node (s6) [right=of s5] {$S_{j+4}$}; \path[->] (b1) edge (ib) (b3) edge (b1) (b4) edge (b3) (b5) edge (b4) (b6) edge (b5) ; \path[->] (s1) edge (is) (s2) edge (s1) (s3) edge (s2) (s5) edge (s3) (s6) edge (s5) ; \path[->] (b1) edge (s1) (b3) edge (s3) (b5) edge (s5) (b6) edge (s6) ; \end{tikzpicture} \caption{Merged mining.} \label{fig:merged_mining} \end{figure} Note that, when implemented with a na\"{i}ve longest-chain fork choice rule, this allows the side chain to re-use hashing power from the parent chain, but not to borrow security. Parent chain blocks cannot be used for checkpointing in this scheme, as they are merely superblocks~\cite{nipopow} (especially if the difficulty of the side chain is higher than that of the parent chain), so the merge mined side chain only gains security against \textit{external} hashing hardware~\cite{sia_asics}. As a consequence, a common criticism of merged mining is that parent chain miners can attack the side chain at virtually zero cost; this criticism does not apply to the scaling solution presented in this work. \subsection{Pegged Side Chains} \label{sec:related:sidechains} The general idea of using side chains to deploy innovations and improvements to a chain without interruption has been suggested for many years~\cite{sidechains}. Namecoin~\cite{namecoin} is one of the more prominent and early examples of a side chain that runs alongside Bitcoin and acts as a decentralized DNS. \begin{definition} A \textit{side chain} is a blockchain that validates data from one or more other blockchains (adapted from~\cite{sidechains}). \end{definition} In plain English, a side chain runs alongside a parent chain (or possibly more than one parent chain, though this configuration is rarely used in practice) and ``understands'' the existence of a canonical parent chain. This allows it to \textit{yank} data (events) from the parent chain to perform actions on its own state.
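As a rough illustration (Python; the event shape is entirely of our own choosing), yanking amounts to replaying parent chain events inside the side chain's state transition function:

\begin{verbatim}
# Sketch: yanking a deposit event from the parent chain. Because the
# side chain validates parent chain data, it can credit funds that
# were locked on the parent chain into its own state.
def apply_parent_events(side_state: dict, parent_block: dict) -> None:
    for event in parent_block.get("events", []):
        if event["kind"] == "deposit":
            account = event["to"]
            side_state[account] = side_state.get(account, 0) + event["amount"]
\end{verbatim}

This direction of the peg is unproblematic precisely because the side chain fully validates the parent chain; it is the reverse direction that the rest of this subsection shows to be difficult.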
Note that there are no requirements on how a side chain is secured---indeed, running a side chain with its own independent consensus protocol is generally counter-productive, as this will make it less secure than its parent chain. It has generally been understood that a completely trustless and secure two-way peg of assets is impossible~\cite{drivechain}, though moving assets from the parent chain to the side chain is possible using the yanking scheme described above. While there have been attempts to implement a two-way peg using light-client proofs~\cite{pos_sidechains,pow_sidechains,drivechain}, such constructions are vulnerable to a minority of block producers on the parent chain or a majority of block producers on the side chain---which presumably will be less costly to attack than the parent chain. \subsection{Plasma Chains} \label{sec:related:plasmachains} Plasma~\cite{plasma} introduced Plasma chains as a potential scaling methodology. At a high level, a Plasma chain operates in much the same manner as a side chain: funds (or, more generally, state) can be yanked from the parent chain---Ethereum---to the Plasma chain, while state can be \textit{exited} through a commit-challenge scheme known as an exit game. An \textit{operator} is usually responsible for collecting transactions into blocks and committing block hashes to the parent chain (this allows the Plasma chain to borrow security from the parent chain without having to run a permissionless consensus protocol of its own). Several variations of Plasma chain constructions have been proposed, using different data models~\cite{plasma_mvp,plasma_cash}, though all of them have significant unresolved issues. Fungible iterations of Plasma~\cite{plasma_mvp} require \textit{mass exits}, as there are no guarantees of Plasma chain liveness or safety, while non-fungible iterations of Plasma~\cite{plasma_cash} require maintaining an ever-growing history of proofs, or posting linear-sized checkpoints on-chain. Operators can generally misbehave in two ways: 1) censoring a user's transactions, or 2) attempting to fraudulently exit state (\textit{i.e.}, assets) back to the parent chain. Each of these is resolved by allowing users to 1) force a state transition on the Plasma chain by executing it on the parent chain, or 2) prove, on the parent chain, that an invalid state transition occurred on the Plasma chain (which can be done implicitly in the case of a mass exit as a response to block withholding by a malicious Plasma operator). Plasma chains are dependent on an honest majority of block producers for state safety; additionally, one critical caveat is that a Plasma chain is only trustless under the strictly stronger assumption that block space is cheaply available on the parent chain to either force a valid state transition or challenge an invalid state transition in finite time. \subsection{Channels} \label{sec:related:channels} Channels were first envisioned as payment channels~\cite{payment_channels} between two or more parties to allow them to exchange money almost instantly without waiting for transactions to be included in blocks on a blockchain. More general-purpose state channels~\cite{state_channels,counterfactual} were later described as a mechanism for participants of the channel to agree on potentially arbitrary state rather than just payments. A channel proceeds by unanimous agreement among a fixed set of channel participants to update its state.
This allows them to have instant finality, as any participant can close the channel by publishing the agreed-upon latest state to the blockchain. A user that attempts to close a channel with an old state can be met with a challenge presenting a more recent state, which by definition is signed by all parties. While the instant finality offered by channels is undoubtedly a significant advantage over side chains and Plasma chains, unanimous agreement has several drawbacks. First, all channel participants must be online in order to sign and agree to a state update, and the set of participants is fixed at channel creation. Second, there is no way to distinguish a user who lost their copy of the most recent state from a malicious user attempting to close the channel with an old state to their advantage. As only the latest state is valid in channel schemes, users can only make \textit{copies} of their local state, not \textit{backups}---the two protect against fundamentally different classes of data failures, with copies being strictly less useful. Payment channel networks~\cite{lighting_network} aim to alleviate the problem of having a fixed participant set by allowing agreement to take place atomically between users with bidirectional payment channels open between themselves. The issues this introduces are legion, and enumerating them is outside the scope of this work. Note that, similarly to Plasma chains, channels are only trustless if block space is available on the blockchain they operate on, and under the assumption of an honest majority of block producers for state safety. \section{Conclusion} \label{sec:conclusion} In this work we introduce a blockchain scaling solution that is both secure and decentralized in practice, and allows for greater transaction throughput than conventional blockchain systems deployed today. In addition, several terms that have emerged in common blockchain parlance are given proper definitions so as to enable and encourage collaboration without confusion. \printbibliography \end{document}
{ "timestamp": "2020-07-27T02:04:52", "yymm": "1904", "arxiv_id": "1904.06441", "language": "en", "url": "https://arxiv.org/abs/1904.06441" }
\section{Introduction} In this paper we give an estimate for the first eigenvalue of the Laplacian of closed Riemannian manifolds with positive Ricci curvature and an almost parallel form, and show the Gromov-Hausdorff closeness to a product space in the almost equality case. One of the most famous theorems about estimates of the first eigenvalue of the Laplacian is the Lichnerowicz-Obata theorem. Lichnerowicz showed the optimal comparison result for the first eigenvalue when the Riemannian manifold has positive Ricci curvature, and Obata showed that equality in the Lichnerowicz estimate implies that the Riemannian manifold is isometric to the standard sphere. In the following, $\lambda_k(g)$ denotes the $k$-th eigenvalue of the Laplacian $\Delta:=-\tr_g \Hess$ acting on functions. \begin{Thm}[Lichnerowicz-Obata theorem] Take an integer $n\geq 2$. Let $(M,g)$ be an $n$-dimensional closed Riemannian manifold. If $\Ric \geq (n-1) g$, then $\lambda_1(g)\geq n$. The equality holds if and only if $(M,g)$ is isometric to the standard sphere of radius $1$. \end{Thm} Petersen \cite{Pe1}, Aubry \cite{Au} and Honda \cite{Ho} showed a stability result for the Lichnerowicz-Obata theorem. In the following, $d_{GH}$ denotes the Gromov-Hausdorff distance and $S^n$ denotes the $n$-dimensional standard sphere of radius $1$ (see Definition \ref{DGH} for the definition of the Gromov-Hausdorff distance). \begin{Thm}[\cite{Au}, \cite{Ho}, \cite{Pe1}]\label{PA} For a given integer $n\geq 2$ and a positive real number $\epsilon>0$, there exists $\delta(n,\epsilon)>0$ such that if $(M,g)$ is an $n$-dimensional closed Riemannian manifold with $\Ric \geq (n-1) g$ and $\lambda_n(g)\leq n+\delta$, then $d_{GH}(M,S^n)\leq \epsilon$. \end{Thm} Note that Petersen considered the pinching condition on $\lambda_{n+1}(g)$, and Aubry and Honda improved it independently. We mention some improvements of the Lichnerowicz estimate when the Riemannian manifold has a special structure. If $(M,g)$ is a real $n$-dimensional K\"{a}hler manifold with $\Ric\geq (n-1)g$, then the Lichnerowicz estimate is improved as follows: \begin{equation}\label{kae} \lambda_1(g)\geq 2(n-1). \end{equation} See \cite[Theorem 11.49]{Be} for the proof. If $(M,g)$ is a real $n$-dimensional quaternionic K\"{a}hler manifold with $\Ric\geq (n-1)g$, then we have \begin{equation}\label{qk} \lambda_1(g)\geq \frac{2n+8}{n+8}(n-1). \end{equation} See \cite{AM} for the proof. In these cases, the Riemannian manifold $(M,g)$ has a non-trivial parallel $2$-form and $4$-form, respectively. When $(M,g)$ is an $n$-dimensional product Riemannian manifold $(N_1\times N_2,g_1+g_2)$ with $\Ric\geq (n-1)g$, we have $$ \lambda_1(g)\geq \min_{i\in\{1,2\}}\left\{\frac{\dim N_i}{\dim N_i-1}\right\}(n-1), $$ and $M$ has a non-trivial parallel form if either $N_1$ or $N_2$ is orientable. Grosjean \cite{gr} gave a unified proof of these improvements of the Lichnerowicz estimate when the Riemannian manifold has a non-trivial parallel form. \begin{Thm}[\cite{gr}]\label{grosjean} Let $(M,g)$ be an $n$-dimensional closed Riemannian manifold. Assume that $\Ric\geq (n-p-1)g$ and that there exists a non-trivial parallel $p$-form on $M$ $(2\leq p\leq n/2)$. Then, we have \begin{equation}\label{grs} \lambda_1(g)\geq n-p. \end{equation} Moreover, if $p<n/2$ and if in addition $M$ is simply connected, then the equality in $(\ref{grs})$ implies that $(M,g)$ is isometric to a product $S^{n-p}\times (X,g')$, where $(X,g')$ is some $p$-dimensional closed Riemannian manifold.
\end{Thm} \begin{Rem} We give several remarks on this theorem. \begin{itemize} \item When $\Ric\geq (n-p-1)g$, the Lichnerowicz estimate is $\lambda_1(g)\geq n(n-p-1)/(n-1)$. Since $(n-p)-n(n-p-1)/(n-1)=p/(n-1)>0$, the estimate (\ref{grs}) improves the Lichnerowicz estimate. \item Grosjean also showed a theorem of this type when $M$ has a convex smooth boundary. \item Though Grosjean originally assumed the manifold is orientable, the assumption can be easily removed by taking the orientable double covering. \item If $(M,g)$ is either a K\"{a}hler manifold with $n\geq 6$ or a quaternionic K\"{a}hler manifold, then the estimate (\ref{kae}) or (\ref{qk}) (with scaling) is stronger than (\ref{grs}). \item There exists no non-trivial parallel $1$-form on any closed Riemannian manifold with positive Ricci curvature. \item The assumption $2\leq p\leq n/2$ (resp. $2\leq p< n/2$) implies $n\geq 4$ (resp. $n\geq 5$). For the case $n=4$ and $p=n/2=2$, the complex projective space $\mathbb{C}P^2$ also satisfies the equality in (\ref{grs}). \item If there exists a non-trivial parallel $p$-form $\omega$ ($1\leq p\leq n-1$) on an $n$-dimensional Riemannian manifold $(M,g)$, then $\omega(x)\in \bigwedge^p T^\ast_x M$ ($x\in M$) is invariant under the holonomy action, and so the holonomy group coincides with neither $\mathrm{SO}(n)$ nor $\mathrm{O}(n)$. \end{itemize} \end{Rem} The main aim of this paper is to show the almost version of Grosjean's result. We also give the almost version of the estimate (\ref{kae}) in Appendix B. We first note that, for a closed Riemannian manifold $(M,g)$, there exists a non-zero $p$-form $\omega$ with $\|\nabla \omega\|_2^2\leq \delta\|\omega\|_2^2$ for some $\delta>0$ if and only if $\lambda_1(\Delta_{C,p})\leq \delta$ holds, where $\lambda_1(\Delta_{C,p})$ is defined by $$ \lambda_1(\Delta_{C,p}):=\inf\left\{\frac{\|\nabla \omega\|_2^2}{\|\omega\|_2^2}: \omega\in\Gamma(\bigwedge^p T^\ast M) \text{ with }\omega\neq 0\right\}. $$ Let us state our eigenvalue estimate. \begin{Ma} For given integers $n\geq 4$ and $2\leq p \leq n/2$, there exists a constant $C(n,p)>0$ such that if $(M,g)$ is an $n$-dimensional closed Riemannian manifold with $\Ric_g\geq (n-p-1)g$, then we have \begin{equation*} \lambda_1(g)\geq n-p-C(n,p)\lambda_1(\Delta_{C,p})^{1/2}. \end{equation*} \end{Ma} We immediately have the following corollary: \begin{Cor} For given integers $n\geq 4$ and $2\leq p \leq n/2$, there exists a constant $C(n,p)>0$ such that if $(M,g)$ is an $n$-dimensional closed Riemannian manifold with $\Ric_g\geq (n-p-1)g$ and $$\frac{n(n-p-1)}{n-1}\leq \lambda_1(g)\leq n-p,$$ then we have \begin{equation*} \lambda_1(\Delta_{C,p})\geq \left(\frac{n-p-\lambda_1(g)}{C(n,p)}\right)^2. \end{equation*} \end{Cor} Note that we always have the lower bound on the eigenvalue of the Laplacian $\lambda_1(g)\geq n(n-p-1)/(n-1)$ if $\Ric_g\geq (n-p-1)g$ by the Lichnerowicz estimate. An upper bound on $C(n,p)$ is computable. However, we do not know its optimal value. We next state the eigenvalue pinching result.
\begin{Mb} For given integers $n\geq 5$ and $2\leq p < n/2$ and a positive real number $\epsilon>0$, there exists $\delta=\delta(n,p,\epsilon)>0$ such that if $(M,g)$ is an $n$-dimensional closed Riemannian manifold with $\Ric_g\geq (n-p-1)g$, \begin{equation*} \lambda_{n-p+1}(g)\leq n-p+\delta \end{equation*} and \begin{equation*} \lambda_1(\Delta_{C,p})\leq \delta, \end{equation*} then $M$ is orientable and $$d_{GH}(M,S^{n-p}\times X)\leq \epsilon,$$ where $X$ is some compact metric space. \end{Mb} \begin{Rem} In fact, we prove that there exist constants $C(n,p)>0$ and $\alpha(n)>0$ such that $$d_{GH}(M,S^{n-p}\times X)\leq C(n,p)\delta^{\alpha(n)}$$ under the assumption of Main Theorem 2. One can easily find the explicit value of $\alpha(n)$ (see Notation \ref{order} and Theorem \ref{MT2}). However, it might be far from the optimal value. By Gromov's pre-compactness theorem, we can take $X$ to be a geodesic space. However, we lose the information about the convergence rate in that case. \end{Rem} Based on Theorem \ref{PA}, one might expect that we can replace the assumption ``$\lambda_{n-p+1}(g)\leq n-p+\delta$'' in Main Theorem 2 with the weaker assumption ``$\lambda_{n-p}(g)\leq n-p+\delta$''. However, an example shows that we cannot do so even if $\delta=0$ (see Proposition \ref{p3e}). Instead, replacing $\lambda_1(\Delta_{C,p})$ with $\lambda_1(\Delta_{C,n-p})$, we have the following theorems: \begin{Md} For given integers $n\geq 4$ and $2\leq p \leq n/2$, there exists a constant $C(n,p)>0$ such that if $(M,g)$ is an $n$-dimensional closed Riemannian manifold with $\Ric_g\geq (n-p-1)g$, then we have \begin{equation*} \lambda_1(g)\geq n-p-C(n,p)\lambda_1(\Delta_{C,n-p})^{1/2}. \end{equation*} \end{Md} \begin{Me} For given integers $n\geq 5$ and $2\leq p < n/2$ and a positive real number $\epsilon>0$, there exists $\delta=\delta(n,p,\epsilon)>0$ such that if $(M,g)$ is an $n$-dimensional closed Riemannian manifold with $\Ric_g\geq (n-p-1)g$, \begin{equation*} \lambda_{n-p}(g)\leq n-p+\delta \end{equation*} and \begin{equation*} \lambda_1(\Delta_{C,n-p})\leq \delta, \end{equation*} then we have $$d_{GH}(M,S^{n-p}\times X)\leq \epsilon,$$ where $X$ is some compact metric space. \end{Me} Note that the assumption ``$\lambda_1(\Delta_{C,n-p})\leq \delta$'' is equivalent to the assumption ``$\lambda_1(\Delta_{C,p})\leq \delta$'' if the manifold is orientable. We would like to point out that our work was motivated by Honda's spectral convergence theorem \cite{Ho2}, which asserts the continuity of the eigenvalues of the connection Laplacian $\Delta_{C,p}$ acting on $p$-forms with respect to the non-collapsing Gromov-Hausdorff convergence, assuming a two-sided bound on the Ricci curvature. By virtue of his theorem, we can generalize our main theorems to Ricci limit spaces under such assumptions. Note that we show our main theorems without the non-collapsing assumption, i.e., without assuming a lower bound on the volume of the Riemannian manifold. Our work was also motivated by the Cheeger-Colding almost splitting theorem (see \cite[Theorem 9.25]{Ch}), whose conclusion is the Gromov-Hausdorff approximation to a product $\mathbb{R}\times X$. As with the almost splitting theorem, we need to show an almost Pythagorean theorem under the assumption of Main Theorem 2. The structure of this paper is as follows. In section 2, we recall some basic definitions and facts, and give calculations of differential forms.
In section 3, we estimate the error terms in Grosjean's formula when the Riemannian manifold has a non-trivial almost parallel $p$-form. As a consequence, we prove Main Theorem 1 and Main Theorem 3. In section 4, we prove Main Theorem 2 and Main Theorem 4. In subsection 4.1, we list some useful techniques for our pinching problem. In subsection 4.2, we show some pinching conditions on the eigenfunctions along geodesics under the assumption $\lambda_k(g)\leq n-p+\delta$ and $\lambda_1(\Delta_{C,p})\leq \delta$. In subsection 4.3, we show that similar results hold under the assumption $\lambda_k(g)\leq n-p+\delta$ and $\lambda_1(\Delta_{C,n-p})\leq \delta$. In subsection 4.4, we show that the eigenfunctions are almost cosine functions in some sense under our pinching condition. In subsection 4.5, we construct an approximation map and show Main Theorem 2 except for the orientability. In subsection 4.6, we give some lemmas to prove the remaining parts of the main theorems. In subsection 4.7, we show the orientability of the manifold under the assumption of Main Theorem 2, and complete its proof. In subsection 4.8, we show that the assumption of Main Theorem 4 implies that $\lambda_{n-p+1}(g)$ is close to $n-p$, and complete the proof of Main Theorem 4. In Appendix A, we discuss Ricci limit spaces. Using the technique of subsection 4.7, we show the stability of unorientability under the non-collapsing Gromov-Hausdorff convergence, assuming a two-sided bound on the Ricci curvature and an upper bound on the diameter. In Appendix B, we give the almost version of the estimate (\ref{kae}), assuming that there exists a $2$-form $\omega$ such that $\|\nabla \omega\|_2$ and $\|J_\omega^2+\Id\|_1$ are small, where $J_\omega\in\Gamma(T^\ast M\otimes T M)$ is defined so that $\omega=g(J_\omega\cdot,\cdot)$. \begin{sloppypar} {\bf Acknowledgments}.\ I am grateful to my supervisor, Professor Shinichiroh Matsuo, for his advice. I also thank Professor Shouhei Honda for helpful discussions about the orientability of Ricci limit spaces. I thank Shunsuke Kano for the discussions about the examples. The work in section 3 was done during my stay at the University of C\^{o}te d'Azur. I would like to thank Professor Erwann Aubry for his warm hospitality. I am grateful to the referee for careful reading of the paper and making valuable suggestions. This work was supported by the JSPS Overseas Challenge Program for Young Researchers and by JSPS Research Fellowships for Young Scientists (JSPS KAKENHI Grant Number JP18J11842). \end{sloppypar} \section{Preliminaries} \subsection{Basic Definitions} We first recall some basic definitions and fix our conventions. \begin{Def}[Hausdorff distance]\label{Dhau} Let $(X,d)$ be a metric space. For each point $x_0\in X$, subsets $A,B\subset X$ and $r>0$, define \begin{align*} d(x_0,A):=&\inf\{d(x_0,a):a\in A\},\\ B_{r}(x_0):=&\{x\in X: d(x,x_0)<r\},\\ B_{r}(A):=&\{x\in X:d(x,A)<r\},\\ d_{H,d}(A,B):=&\inf\{\epsilon>0:A\subset B_{\epsilon}(B) \text{ and } B\subset B_{\epsilon}(A)\}. \end{align*} We call $d_{H,d}$ the Hausdorff distance. \end{Def} The Hausdorff distance defines a metric on the collection of compact subsets of $X$. \begin{Def}[Gromov-Hausdorff distance]\label{DGH} Let $(X,d_X),(Y,d_Y)$ be metric spaces. Define \begin{align*} d_{GH}(X,Y):=\inf\Big\{d_{H,d}(X,Y): &\text{ $d$ is a metric on $X\coprod Y$ such that}\\ &\qquad\qquad\quad\text{$d|_X=d_X$ and $d|_Y=d_Y$}\Big\}.
\end{align*} \end{Def} The Gromov-Hausdorff distance defines a metric on the set of isometry classes of compact metric spaces (see \cite[Proposition 11.1.3]{Pe3}). \begin{Def}[$\epsilon$-Hausdorff approximation map]\label{hap} Let $(X,d_X),(Y,d_Y)$ be metric spaces. We say that a map $f\colon X\to Y$ is an $\epsilon$-Hausdorff approximation map for $\epsilon>0$ if the following two conditions hold. \begin{itemize} \item[(i)] For all $a,b\in X$, we have $|d_X(a,b)-d_Y(f(a),f(b))|< \epsilon$, \item[(ii)] $f(X)$ is $\epsilon$-dense in $Y$, i.e., for all $y\in Y$, there exists $x\in X$ with $d_Y(f(x),y)< \epsilon$. \end{itemize} \end{Def} If there exists an $\epsilon$-Hausdorff approximation map $f\colon X\to Y$, then we can show that $d_{GH}(X,Y)\leq 3\epsilon/2$ by considering the following metric $d$ on $X\coprod Y$: \begin{empheq}[left={d(a,b)=\empheqlbrace}]{align*} &\qquad d_X(a,b)&& (a,b\in X),\\ &\,\frac{\epsilon}{2} +\inf_{x\in X}(d_X(a,x)+d_Y(f(x),b))&&(a\in X,\,b\in Y),\\ &\qquad d_Y(a,b)&&(a,b\in Y). \end{empheq} If $d_{GH}(X,Y)< \epsilon$, then there exists a $2\epsilon$-Hausdorff approximation map from $X$ to $Y$. Let $C(u_1,\ldots,u_l)>0$ denote a positive function depending only on the numbers $u_1,\ldots,u_l$. For a set $X$, $\Card X$ denotes the cardinality of $X$. Let $(M,g)$ be a closed Riemannian manifold. For any $p\geq 1$, we use the normalized $L^p$-norm: \begin{equation*} \|f\|_p^p:=\frac{1}{\Vol(M)}\int_M |f|^p\,d\mu_g, \end{equation*} and $\|f\|_{\infty}:=\mathop{\mathrm{ess~sup}}\limits_{x\in M}|f(x)|$ for a measurable function $f$ on $M$. We also use this notation for tensors. We have $\|f\|_p\leq \|f\|_q$ for any $p\leq q \leq \infty$. Let $\nabla$ denote the Levi-Civita connection. Throughout this paper, $0=\lambda_0(g)< \lambda_1(g) \leq \lambda_2(g) \leq\cdots \to \infty$ denotes the eigenvalues of the Laplacian $\Delta=-\tr\Hess$ acting on functions. We sometimes identify $TM$ and $T^\ast M$ using the metric $g$. Given points $x,y\in M$, let $\gamma_{x,y}$ denote one of the minimal geodesics with unit speed such that $\gamma_{x,y}(0)=x$ and $\gamma_{x,y}(d(x,y))=y$. For given $x\in M$ and $u\in T_x M$ with $|u|=1$, let $\gamma_{u}\colon \mathbb{R}\to M$ denote the geodesic with unit speed such that $\gamma_u(0)=x$ and $\dot{\gamma}_u(0)=u$. For any $x\in M$ and $u\in T_x M$ with $|u|=1$, put $$ t(u):=\sup\{t\in\mathbb{R}_{>0}: d(x,\gamma_u(t))=t\}, $$ and define $I_x\subset M$ to be the complement of the cut locus at $x$ (see also \cite[p.104]{Sa}), i.e., $$ I_x:=\{\gamma_u (t): u\in T_x M \text{ with $|u|=1$ and } 0\leq t< t(u)\}. $$ Then, $I_x$ is open and $\Vol(M\setminus I_x)=0$ \cite[III Lemma 4.4]{Sa}. For any $y\in I_x\setminus \{x\}$, the minimal geodesic $\gamma_{x,y}$ is uniquely determined. The function $d(x,\cdot)\colon M\to \mathbb{R}$ is differentiable in $I_x\setminus\{x\}$ and $\nabla d(x,\cdot)(y)=\dot{\gamma}_{x,y}(d(x,y))$ holds for any $y\in I_x\setminus \{x\}$ \cite[III Proposition 4.8]{Sa}. Let $V$ be an $n$-dimensional real vector space with an inner product $\langle,\rangle$. We define inner products on $\bigwedge^k V$ and $V\otimes \bigwedge^k V$ as follows: \begin{equation*} \begin{split} &\langle v_1\wedge\ldots\wedge v_k,w_1\wedge \ldots\wedge w_k\rangle=\det \{\langle v_i,w_j\rangle\}_{i,j},\\ &\langle v_0\otimes v_1\wedge\ldots\wedge v_k,w_0\otimes w_1\wedge \ldots\wedge w_k\rangle=\langle v_0,w_0\rangle \det \{\langle v_i,w_j\rangle \}_{i,j}, \end{split} \end{equation*} for $v_0,\ldots,v_k,w_0,\ldots,w_k\in V$.
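For instance, when $k=2$ this determinant is the Gram determinant, so $$ |v_1\wedge v_2|^2=|v_1|^2|v_2|^2-\langle v_1,v_2\rangle^2, $$ which is the squared area of the parallelogram spanned by $v_1$ and $v_2$. In particular, if $\{e_1,\ldots,e_n\}$ is an orthonormal basis of $V$, then the elements $e_{i_1}\wedge\cdots\wedge e_{i_k}$ with $i_1<\cdots<i_k$ form an orthonormal basis of $\bigwedge^k V$.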
For $\alpha\in V$ and $\omega\in \bigwedge^k V$, there exists a unique $\iota(\alpha)\omega\in \bigwedge^{k-1} V$ such that $\langle\iota(\alpha)\omega,\eta\rangle=\langle\omega,\alpha \wedge \eta\rangle$ holds for any $\eta\in \bigwedge^{k-1} V $. If $k=0$, we define $\iota(\alpha)\omega=0$ and $\bigwedge^{-1}V=\{0\}$. Then, $\iota$ defines a bi-linear map: $$ \iota\colon V\times \bigwedge^k V\to \bigwedge^{k-1} V. $$ By identifying $V$ and $V^\ast$ using $\langle,\rangle$, we also use the notation $\iota$ for the bi-linear map: $$ \iota\colon V^\ast \times \bigwedge^k V\to \bigwedge^{k-1} V. $$ For any Riemannian manifold $(M,g)$, we define operators $\nabla^\ast \colon \Gamma(T^\ast M\otimes \bigwedge^k T^\ast M)\to \Gamma(\bigwedge^k T^\ast M)$ and $d^\ast \colon \Gamma(\bigwedge^k T^\ast M)\to \Gamma(\bigwedge^{k-1}T^\ast M)$ by \begin{align*} \nabla^\ast(\alpha\otimes \beta):&=-\tr_{T^\ast M} \nabla(\alpha\otimes \beta) =-\sum_{i=1}^n \left(\nabla_{e_i}\alpha\right)(e_i)\cdot \beta-\sum_{i=1}^n\alpha(e_i)\cdot\nabla_{e_i}\beta.\\ d^\ast \omega:&=-\sum_{i=1}^n\iota(e_i)\nabla_{e_i}\omega \end{align*} for all $\alpha\otimes\beta\in \Gamma(T^\ast M\otimes\bigwedge^k T^\ast M)$ and $\omega\in\Gamma(\bigwedge^k T^\ast M)$, where $n=\dim M$ and $\{e_1,\ldots,e_n\}$ is an orthonormal basis of $TM$. If $M$ is closed, then we have \begin{align*} \int_M \langle T,\nabla \alpha\rangle\,d\mu_g&=\int_M \langle \nabla^\ast T, \alpha\rangle\,d\mu_g,\\ \int_M \langle \omega,d\eta \rangle\,d\mu_g&=\int_M \langle d^\ast \omega, \eta \rangle\,d\mu_g \end{align*} for all $T\in\Gamma(T^\ast M\otimes\bigwedge^k T^\ast M)$, $\alpha\in\Gamma(\bigwedge^k T^\ast M)$, $\omega\in\Gamma(\bigwedge^k T^\ast M)$ and $\eta\in\Gamma(\bigwedge^{k-1} T^\ast M)$ by the divergence theorem. The Hodge Laplacian $\Delta\colon \Gamma(\bigwedge^k T^\ast M)\to\Gamma(\bigwedge^k T^\ast M)$ is defined by $$ \Delta:=d d^\ast +d^\ast d. $$ \begin{notation} For an $n$-dimensional Riemannian manifold $(M,g)$, we can take orthonormal basis of $TM$ only locally in general. However, for example, the tensor $$ \sum_{i=1}^n e^i\otimes \iota(\nabla_{e_i} \nabla f)\omega\in \Gamma(T^\ast M\otimes \bigwedge^{k-1} T^\ast M)\quad (f\in C^\infty(M),\,\omega\in \Gamma(\bigwedge^k T^\ast M)) $$ is defined independently of the choice of the orthonormal basis $\{e_1,\ldots,e_n\}$ of $TM$, where $\{e^1,\ldots,e^n\}$ denotes its dual. Thus, we sometimes use such notation without taking a particular orthonormal basis. \end{notation} Finally, we list some important notation. Let $(M,g)$ be a closed Riemannian manifold. \begin{itemize} \item $d$ denotes the Riemannian distance function. \item $\Ric$ denotes the Ricci curvature tensor. \item $\diam$ denotes the diameter. \item $\Vol$ or $\mu_g$ denotes the Riemannian volume measure. \item$\|\cdot\|_p$ denotes the normalized $L^p$-norm for each $p\geq 1$, which is defined by \begin{equation*} \|f\|_p^p:=\frac{1}{\Vol(M)}\int_M |f|^p\,d\mu_g \end{equation*} for any measurable function $f$ on $M$. \item $\|f\|_{\infty}$ denotes the essential sup of $|f|$ for any measurable function $f$ on $M$. \item $\nabla$ denotes the Levi-Civita connection. \item $\nabla^2$ denotes the Hessian for functions. \item $\Delta\colon \Gamma(\bigwedge^k T^\ast M)\to\Gamma(\bigwedge^k T^\ast M)$ denotes the Hodge Laplacian defined by $\Delta:=d d^\ast +d^\ast d$. We frequently use the Laplacian acting on functions. Note that $\Delta=-\tr_g \nabla^2$ holds for functions under our sign convention. 
\item $0=\lambda_0(g)< \lambda_1(g) \leq \lambda_2(g) \leq\cdots \to \infty$ denotes the eigenvalues of the Laplacian acting on functions. \item $\gamma_{x,y}\colon [0,d(x,y)]\to M$ denotes one of the minimal geodesics with unit speed such that $\gamma_{x,y}(0)=x$ and $\gamma_{x,y}(d(x,y))=y$ for any $x,y\in M$. \item $\gamma_{u}\colon \mathbb{R}\to M$ denotes the geodesic with unit speed such that $\gamma_u(0)=x$ and $\dot{\gamma}_u(0)=u$ for any $x\in M$ and $u\in T_x M$ with $|u|=1$. \item $I_x\subset M$ denotes the complement of the cut locus at $x\in M$. We have $\Vol(M\setminus I_x)=0$. We have that $\gamma_{x,y}$ is uniquely determined and $\nabla d(x,\cdot)(y)=\dot{\gamma}_{x,y}(d(x,y))$ holds for any $y\in I_x\setminus\{x\}$. \item $\Delta_{C,k}=\nabla^\ast \nabla\colon \Gamma(\bigwedge^k T^\ast M)\to \Gamma(\bigwedge^k T^\ast M)$ denotes the connection Laplacian acting on $k$-forms. \item $0\leq \lambda_1(\Delta_{C,k}) \leq \lambda_2(\Delta_{C,k}) \leq\cdots \to \infty$ denotes the eigenvalues of the connection Laplacian $\Delta_{C,k}$ acting on $k$-forms. \item $S^n(r)$ denotes the $n$-dimensional standard sphere of radius $r$. \item $S^n:=S^n(1)$. \end{itemize} Note that the lowest eigenvalue of the Laplacian $\Delta$ acting on functions is always equal to $0$, and so we start counting its eigenvalues from $i=0$. This is not the case with the connection Laplacian $\Delta_{C,k}$ acting on $k$-forms, and so we start counting its eigenvalues from $i=1$. For any $i\in\mathbb{Z}_{>0}$, we have $$ \lambda_i(\Delta_{C,0})=\lambda_{i-1}(g). $$ \subsection{Calculus of Differential Forms} In this subsection, we recall some facts about differential forms, and do some calculations. We first recall the decomposition: \begin{equation*} T^\ast M\otimes \bigwedge^k T^\ast M=T^{k,1}M\oplus\bigwedge^{k+1} T^\ast M\oplus \bigwedge^{k-1} T^\ast M. \end{equation*} See also \cite[Section 2]{Se}. Let $V$ be an $n$-dimensional real vector space with an inner product $\langle,\rangle$. We put \begin{equation*} \begin{split} &P_1\colon V\otimes \bigwedge^k V\to \bigwedge^{k+1} V,\quad P_1(\alpha\otimes \omega):=\left(\frac{1}{k+1}\right)^\frac{1}{2}\alpha\wedge\omega,\\ &P_2\colon V\otimes \bigwedge^k V\to \bigwedge^{k-1} V,\quad P_2(\alpha\otimes \omega):=\left(\frac{1}{n-k+1}\right)^\frac{1}{2}\iota(\alpha)\omega,\\ &Q_1\colon \bigwedge^{k+1} V\to V\otimes \bigwedge^k V,\quad Q_1(\zeta):=\left(\frac{1}{k+1}\right)^\frac{1}{2}\sum_{i=1}^n e^i\otimes\iota(e^i)\zeta,\\ &Q_2\colon \bigwedge^{k-1} V\to V\otimes \bigwedge^k V,\quad Q_2(\eta):=\left(\frac{1}{n-k+1}\right)^\frac{1}{2}\sum_{i=1}^n e^i\otimes e^i\wedge\eta, \end{split} \end{equation*} where $\{e^1,\ldots,e^n\}$ is an orthonormal basis of $V$. Then, we have \begin{itemize} \item $\Imag Q_1\bot \Imag Q_2$, \item $P_i\circ Q_i=\Id$ for each $i=1,2$, \item $Q_1$ and $Q_2$ preserve the norms, \item $Q_i\circ P_i\colon V\otimes \bigwedge^k V\to V\otimes \bigwedge^k V$ is symmetric and $(Q_i\circ P_i)^2=Q_i\circ P_i$ for each $i=1,2$. \end{itemize} Therefore, $Q_i\circ P_i$ is the orthogonal projection $V\otimes \bigwedge^k V\to \Imag Q_i$. Since $\bigwedge^{k+1} V\cong \Imag Q_1$ and $\bigwedge^{k-1} V \cong\Imag Q_2$, we can regard $\bigwedge^{k+1} V$ and $\bigwedge^{k-1} V$ as subspaces of $V\otimes \bigwedge^k V$. Take an $n$-dimensional Riemannian manifold $(M,g)$ and consider the case when $V=T^\ast_x M$ ($x\in M$).
We can take a sub-bundle $T^{k,1}M$ of $T^\ast M\otimes \bigwedge^k T^\ast M$ such that \begin{equation*} T^\ast M\otimes \bigwedge^k T^\ast M=T^{k,1}M\oplus\bigwedge^{k+1} T^\ast M\oplus \bigwedge^{k-1} T^\ast M \end{equation*} is an orthogonal decomposition. Then, for $\omega\in\Gamma(\bigwedge^k T^\ast M)$, we can decompose $\nabla \omega\in \Gamma(T^\ast M\otimes\bigwedge^k T^\ast M)$: the $\bigwedge^{k+1} T^\ast M$-component is equal to $\left(1/(k+1)\right)^{1/2}d\omega$ and the $\bigwedge^{k-1} T^\ast M$-component is equal to $-\left(1/(n-k+1)\right)^{1/2} d^\ast \omega$. Let $T(\omega)$ denote the remaining part ($T\colon \Gamma(\bigwedge^k T^\ast M)\to \Gamma(T^{k,1}M)$). Then, we have \begin{equation*} \nabla \omega=T(\omega)+ \left(\frac{1}{k+1}\right)^\frac{1}{2} Q_1(d\omega)-\left(\frac{1}{n-k+1}\right)^\frac{1}{2}Q_2(d^\ast \omega). \end{equation*} Therefore, we get \begin{equation}\label{2b} |\nabla\omega|^2=|T(\omega)|^2+\frac{1}{k+1} |d\omega|^2+\frac{1}{n-k+1}|d^\ast \omega|^2. \end{equation} If $d^\ast \omega=0$ and $T(\omega)=0$, then $\omega$ is called a Killing $k$-form (see also \cite[Definition 2.1]{Se}). We next recall the Bochner-Weitzenb\"ock formula. \begin{Def}\label{p2a} Let $(M,g)$ be an $n$-dimensional Riemannian manifold. We define a homomorphism $\mathcal{R}_k\colon \bigwedge^k T^\ast M\to \bigwedge^k T^\ast M$ as \begin{equation*} \mathcal{R}_k \omega=-\sum_{i,j}e^i\wedge \iota(e_j)\left(R(e_i,e_j)\omega\right) \end{equation*} for any $\omega\in\bigwedge^k T^\ast M$, where $\{e_1,\ldots,e_n\}$ is an orthonormal basis of $TM$, $\{e^1,\ldots,e^n \}$ is its dual and $R(e_i,e_j)\omega$ is defined by $$ R(e_i,e_j)\omega=\nabla_{e_i}\nabla_{e_j}\omega-\nabla_{e_j}\nabla_{e_i}\omega-\nabla_{[e_i,e_j]}\omega\in \Gamma(\bigwedge^k T^\ast M). $$ \end{Def} Note that if $k=1$, then we have $\mathcal{R}_1 \omega=\Ric (\omega,\cdot)$ for any $\omega\in\Gamma(T^\ast M)$. The Bochner-Weitzenb\"ock formula is stated as follows: \begin{Thm}[Bochner-Weitzenb\"ock formula]\label{p2b} For any $\omega\in\Gamma (\bigwedge^k T^\ast M)$, we have \begin{equation*} \Delta\omega=\nabla^\ast \nabla \omega+\mathcal{R}_k \omega. \end{equation*} \end{Thm} In particular, we have the following theorem when $k=1$: \begin{Thm}[Bochner-Weitzenb\"ock formula for 1-forms] For any $\omega\in\Gamma(T^\ast M)$, we have \begin{equation*} \Delta \omega =\nabla^\ast \nabla \omega + \Ric(\omega,\cdot). \end{equation*} \end{Thm} Let us do some calculations of differential forms. \begin{Lem}\label{p2c} Let $(M,g)$ be an $n$-dimensional Riemannian manifold. Take a vector field $X\in\Gamma(TM)$, a $p$-form $\omega\in\Gamma(\bigwedge^p T^\ast M)$ $(p\geq 1)$ and a local orthonormal basis $\{e_1,\ldots,e_n\}$ of $TM$. \begin{itemize} \item[(i)] We have $$\mathcal{R}_{p-1}(\iota(X)\omega)=\iota(X) \mathcal{R}_p \omega+\iota(\Ric(X))\omega+2\sum_{i=1}^n\iota(e_i)(R(X,e_i)\omega).$$ \item[(ii)] We have $$\Delta (\iota(X)\omega)=\iota(\Delta X)\omega+\iota(X)\Delta \omega +2\sum_{i=1}^n\iota(e_i) (R(X,e_i)\omega)-2\sum_{i=1}^n\iota(\nabla_{e_i}X) (\nabla_{e_i}\omega).$$ \item[(iii)] We have $$\sum_{i=1}^n\iota(e_i) (R(X,e_i)\omega) =-\nabla_X d^\ast \omega +d^\ast \nabla_X \omega+\sum_{i,j=1}^n \langle \nabla_{e_j} X, e_i\rangle\iota(e_j)\nabla_{e_i}\omega.$$ \end{itemize} \begin{proof} Let $\{e^1,\ldots,e^n\}$ be the dual basis of $\{e_1,\ldots,e_n\}$. We first show (i). If $p=1$, both sides are equal to $0$. Let us assume $p\geq 2$.
We have \begin{equation}\label{2c} \begin{split} &\iota(\Ric(X))\omega\\ =&\frac{1}{(p-1)!}\sum_{i,i_1,\ldots,i_{p-1}}\omega(R(X,e_i)e_i,e_{i_1},\cdots,e_{i_{p-1}})e^{i_1}\wedge\cdots\wedge e^{i_{p-1}}\\ =&\frac{-1}{(p-1)!}\sum_{i,i_1,\ldots,i_{p-1}} (R(X,e_i)\omega)(e_i,e_{i_1},\ldots,e_{i_{p-1}})e^{i_1}\wedge\cdots\wedge e^{i_{p-1}}\\ &-\frac{1}{(p-1)!}\sum_{i,i_1,\ldots,i_{p-1}} \sum_{l=1}^{p-1} \omega(e_i,e_{i_1},\cdots,R(X,e_i)e_{i_l},\ldots,e_{i_{p-1}})e^{i_1}\wedge\cdots\wedge e^{i_{p-1}}\\ =&-\sum_{i=1}^n\iota(e_i)(R(X,e_i)\omega)\\ &-\frac{1}{(p-1)!}\sum_{i,i_1,\ldots,i_{p-1}} \sum_{l=1}^{p-1} \omega(e_i,e_{i_1},\cdots,R(X,e_i)e_{i_l},\ldots,e_{i_{p-1}})e^{i_1}\wedge\cdots\wedge e^{i_{p-1}}. \end{split} \end{equation} We calculate the second term. \begin{equation*} \begin{split} -&\frac{1}{(p-1)!}\sum_{i,i_1,\ldots,i_{p-1}} \sum_{l=1}^{p-1} \omega(e_i,e_{i_1},\cdots,R(X,e_i)e_{i_l},\ldots,e_{i_{p-1}})e^{i_1}\wedge\cdots\wedge e^{i_{p-1}}\\ =&\frac{1}{(p-1)!}\sum_{l=1}^{p-1} \sum_{i,j,i_1,\ldots,i_{p-1}}\langle R(e_j,e_{i_l})X,e_i\rangle\omega(e_i,e_j,e_{i_1},\cdots,\widehat{e_{i_l}},\ldots,e_{i_{p-1}})\\ &\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad e^{i_l}\wedge e^{i_1}\wedge\cdots\wedge\widehat{e^{i_l}}\wedge\cdots\wedge e^{i_{p-1}}\\ =&\sum_{j,k} e^k\wedge\iota(e_j)\iota(R(e_j,e_{k})X)\omega\\ =&\sum_{j,k} e^k\wedge\iota(e_j)R(e_j,e_{k})(\iota(X)\omega)-\sum_{j,k} e^k\wedge\iota(e_j)\iota(X)R(e_j,e_{k})\omega\\ =&\mathcal{R}_{p-1}(\iota(X)\omega)-\iota(X)\mathcal{R}_{p}\omega -\sum_{j=1}^n \iota(e_j)(R(X,e_j)\omega). \end{split} \end{equation*} Combining this and (\ref{2c}), we get (i). Let us show (ii). We have \begin{equation*} \nabla^\ast \nabla \iota(X)\omega =\iota(\nabla^\ast \nabla X)\omega -2\sum_{i} \iota(\nabla_{e_i}X)\nabla_{e_i}\omega+\iota(X)\nabla^\ast \nabla\omega. \end{equation*} Thus, by (i), we get \begin{equation*} \begin{split} \Delta ( \iota(X)\omega) =&\nabla^\ast \nabla \iota(X)\omega+\mathcal{R}_{p-1}\iota(X)\omega\\ =&\iota(\Delta X)\omega+\iota(X)\Delta \omega +2\sum_{i=1}^n\iota(e_i) (R(X,e_i)\omega)-2\sum_{i=1}^n\iota(\nabla_{e_i}X) (\nabla_{e_i}\omega). \end{split} \end{equation*} This gives (ii). Finally, we show (iii). We have \begin{equation*} \begin{split} \sum_{i=1}^n \iota(e_i)(R(X,e_i)\omega) =&\sum_{i=1}^n \iota(e_i)\left(\nabla_{X}\nabla_{e_i}\omega-\nabla_{e_i}\nabla_X\omega-\nabla_{\nabla_{X} e_i}\omega+\nabla_{\nabla_{e_i}X}\omega\right)\\ =&-\nabla_{X}d^\ast \omega+d^\ast \nabla_X\omega+\sum_{i,j=1}^n \langle \nabla_{e_j} X, e_i\rangle\iota(e_j)\nabla_{e_i}\omega. \end{split} \end{equation*} This gives (iii). \end{proof} \end{Lem} When $\omega$ is parallel, we have the following corollary. \begin{Cor}\label{p2d} Let $(M,g)$ be an $n$-dimensional Riemannian manifold. Take a vector field $X\in\Gamma(TM)$ and a parallel $p$-form $\omega\in\Gamma(\bigwedge^p T^\ast M)$ $(p\geq 1)$. \begin{itemize} \item[(i)] We have $$\mathcal{R}_{p-1}(\iota(X)\omega)=\iota(\Ric(X))\omega.$$ \item[(ii)] We have $$\Delta (\iota(X)\omega)=\iota(\Delta X)\omega.$$ \end{itemize} \end{Cor} Finally, we give some easy equations for later use. Let $(M,g)$ be an $n$-dimensional Riemannian manifold. Take a local orthonormal basis $\{e_1,\ldots,e_n\}$ of $TM$. Let $\{e^1,\ldots,e^n\}$ be its dual. For any $\omega,\eta\in\Gamma(\bigwedge^k T^\ast M)$, we have $$ \sum_{i=1}^n \langle e^i\wedge \omega, e^i\wedge \eta \rangle=(n-k)\langle\omega,\eta\rangle, \quad \sum_{i=1}^n \langle \iota(e_i)\omega, \iota(e_i) \eta \rangle=k\langle\omega,\eta\rangle.
$$ For any $\alpha_1,\ldots,\alpha_k\in \Gamma(T^\ast M)$, we have $$ Q_1(\alpha_1\wedge\cdots\wedge\alpha_k)=\left(\frac{1}{k}\right)^{1/2}\sum_{i=1}^k(-1)^{i-1}\alpha_i\otimes\alpha_1\wedge\cdots\wedge\widehat{\alpha_i}\wedge\cdots\wedge\alpha_k. $$ Since $Q_1$ preserves the norms, we have \begin{equation}\label{q1k} \begin{split} &k\left|\alpha_1\wedge\cdots\wedge\alpha_k\right|^2\\ =&\left|\sum_{i=1}^k(-1)^{i-1}\alpha_i\otimes\alpha_1\wedge\cdots\wedge\widehat{\alpha_i}\wedge\cdots\wedge\alpha_k\right|^2 \end{split} \end{equation} for any $\alpha_1,\ldots,\alpha_k\in \Gamma(T^\ast M)$. Suppose that $M$ is oriented. For any $k$, the Hodge star operator $\ast\colon \bigwedge^k T^\ast M\to \bigwedge^{n-k} T^\ast M$ is defined so that $$ \langle\ast\omega,\eta \rangle V_g=\omega\wedge\eta $$ for all $\omega\in\Gamma(\bigwedge^k T^\ast M)$ and $\eta\in\Gamma(\bigwedge^{n-k} T^\ast M)$, where $V_g$ denotes the volume form on $(M,g)$. For any $\alpha\in\Gamma(T^\ast M)$, $\omega\in\Gamma(\bigwedge^k T^\ast M)$ and $\eta\in\Gamma(\bigwedge^{k-1} T^\ast M)$, we have \begin{align*} \langle\ast(\omega \wedge \alpha),\eta\rangle V_g&=\omega \wedge \alpha \wedge \eta,\\ \langle\iota(\alpha)\ast \omega,\eta\rangle V_g=\langle\ast \omega,\alpha\wedge \eta\rangle V_g&=\omega\wedge \alpha \wedge \eta. \end{align*} Thus, we get \begin{equation}\label{hstar2} \ast(\omega \wedge \alpha)=\iota(\alpha)\ast \omega. \end{equation} Therefore, for any $\alpha,\beta\in\Gamma(T^\ast M)$ and $\omega,\eta\in\Gamma(\bigwedge^k T^\ast M)$, we have \begin{equation*} \begin{split} &\langle\iota (\alpha)\omega,\iota(\beta)\eta\rangle =\langle\omega,\alpha \wedge \iota(\beta)\eta\rangle\\ =&-\langle\beta \wedge \omega,\alpha \wedge \eta\rangle+\langle\alpha,\beta\rangle\langle\omega,\eta\rangle =-\langle\iota(\beta)\ast \omega,\iota(\alpha)\ast \eta\rangle+\langle\alpha,\beta\rangle\langle\omega,\eta\rangle, \end{split} \end{equation*} and so \begin{equation}\label{hstar} \langle\iota (\alpha)\omega,\iota(\beta)\eta\rangle+\langle\iota(\beta)\ast \omega,\iota(\alpha)\ast \eta\rangle=\langle\alpha,\beta\rangle\langle\omega,\eta\rangle. \end{equation} \section{Almost Parallel $p$-form} In this section, we show Main Theorems 1 and 3. \subsection{Parallel $p$-form} In this subsection, we show some easy results when the Riemannian manifold has a non-trivial parallel $p$-form. We first give an easy proof of what Grosjean called a new Bochner-Reilly formula \cite[Proposition 3.1]{gr} for closed Riemannian manifolds with a non-trivial parallel $p$-form $\omega$. Similarly, we also get the formula \cite[Proposition 3.1]{gr} for Riemannian manifolds with boundary. In the next subsection, we estimate the error terms when $\omega$ is not parallel. \begin{Prop}[Bochner-Reilly-Grosjean formula \cite{gr}]\label{p3a} Let $(M,g)$ be an $n$-dimensional closed Riemannian manifold. For any $f\in C^\infty(M)$ and any parallel $p$-form $\omega$ $(1\leq p \leq n-1)$ on $M$, we have \begin{equation*} \begin{split} &\int_M |T (\iota(\nabla f)\omega)|^2\,d\mu_g\\ =&\frac{p-1}{p}\int_M\langle\iota(\nabla f)\omega, \iota(\nabla\Delta f)\omega\rangle \,d\mu_g-\int_M \langle\iota(\Ric(\nabla f))\omega,\iota(\nabla f)\omega\rangle\,d\mu_g. \end{split} \end{equation*} See subsection 2.2 for the definition of $T\colon \Gamma(\bigwedge^{p-1}T^\ast M)\to \Gamma(T^{p-1,1}M)$.
\end{Prop} \begin{proof} Since $d^\ast \iota(\nabla f) \omega=-d^\ast d^\ast(f\omega)=0$, we have \begin{equation}\label{3a} \begin{split} &\int_M \langle\iota(\Ric(\nabla f))\omega,\iota(\nabla f)\omega\rangle\,d\mu_g\\ =&\int_M \langle\mathcal{R}_{p-1}(\iota(\nabla f)\omega),\iota(\nabla f)\omega\rangle\,d\mu_g\\ =&\int_M \langle d(\iota(\nabla f)\omega),d(\iota(\nabla f)\omega)\rangle\,d\mu_g -\int_M \langle\nabla(\iota(\nabla f)\omega),\nabla(\iota(\nabla f)\omega)\rangle\,d\mu_g \end{split} \end{equation} by Corollary \ref{p2d} (i), Bochner-Weitzenb\"{o}ck formula and the divergence theorem. By (\ref{2b}) and Corollary \ref{p2d} (ii), we have \begin{equation}\label{3b} \begin{split} &\int_M \langle d(\iota(\nabla f)\omega),d(\iota(\nabla f)\omega)\rangle\,d\mu_g -\int_M \langle\nabla(\iota(\nabla f)\omega),\nabla(\iota(\nabla f)\omega)\rangle\,d\mu_g\\ =&\frac{p-1}{p}\int_M \langle \iota(\nabla\Delta f)\omega),\iota(\nabla f)\omega\rangle\,d\mu_g-\int_M |T(\iota(\nabla f)\omega)|^2\,d\mu_g \end{split} \end{equation} By (\ref{3a}) and (\ref{3b}), we get the proposition. \end{proof} Based on Proposition \ref{p3a}, Grosjean showed Theorem \ref{grosjean}. Assuming more strong condition on eigenvalues, we remove the assumption that the manifold is simply connected from Theorem \ref{grosjean}. \begin{Cor}\label{p3d2} Let $(M,g)$ be an $n$-dimensional closed Riemannian manifold. Assume that $\Ric\geq (n-p-1)g$ and there exists a non-trivial parallel $p$-form on $M$ $(2\leq p< n/2)$. If $ \lambda_{n-p+1}(g)= n-p, $ then $(M,g)$ is isometric to a product $S^{n-p}\times (X,g')$, where $(X,g')$ is some $p$-dimensional closed Riemannian manifold. \end{Cor} \begin{proof} Let $f_k$ be the $k$-th eigenfunction of the Laplacian on $S^{n-p}$. Note that the functions $f_1,\ldots,f_{n-p+1}$ are height functions. By Theorem \ref{grosjean}, the universal cover $(\widetilde{M},\tilde{g})$ of $(M,g)$ is isometric to a product $S^{n-p}\times (X,g')$, where $(X,g')$ is some $p$-dimensional closed Riemannian manifold. We regard the function $f_i$ as a function on $\widetilde{M}$. Since $\lambda_{n-p+1}(g)= n-p$, each $f_i\in C^\infty(\widetilde{M})$ ($i=1,\ldots,n-p+1$) is a pull back of some function on $M$. Thus, the covering transformation preserves $f_1,\ldots,f_{n-p+1}$. Therefore, the covering transformation does not act on $S^{n-p}$, and so we get the corollary. \end{proof} The almost version of this corollary is Main Theorem 2. Finally, we show that the assumption of Corollary \ref{p3d2} is optimal in some sense by giving an example. Take a positive odd integer $p$ with $p\geq 3$ and a positive integer $n$ with $n> 2p$. Put $a:=\sqrt{(p-1)/(n-p-1)}$. We define an equivalence relation $\sim$ on $S^{n-p}\times S^p(a)$ as follows: \begin{equation*} \begin{split} &((x_0,\ldots,x_{n-p}),(y_0,\ldots,y_p))\sim ((x'_0,\ldots,x'_{n-p}),(y'_0,\ldots,y'_p))\\ \Leftrightarrow &\text{ there exists $k\in \mathbb{Z}$ such that}\\ &((x'_0,\ldots,x'_{n-p}),(y'_0,\ldots,y'_p))=(((-1)^k x_0, x_1,\ldots,x_{n-p}),(-1)^k(y_0,\ldots,y_p)) \end{split} \end{equation*} for any $((x_0,\ldots,x_{n-p}),(y_0,\ldots,y_p)), ((x'_0,\ldots,x'_{n-p}),(y'_0,\ldots,y'_p))\in S^{n-p}\times S^p(a)$. Then, we have the following: \begin{Prop}\label{p3e} We have the following properties: \begin{itemize} \item $(M,g)=(S^{n-p}\times S^p(a))/\sim$ is an $n$-dimensional closed Riemannian manifold with a non-trivial parallel $p$-form. \item $\Ric= (n-p-1)g$. \item $\lambda_{n-p}(g)=n-p$. 
\item $(M,g)$ is not isometric to any product Riemannian manifolds. \end{itemize} \end{Prop} \begin{proof} Let $\omega$ be the volume form on $S^p(a)$. Since the action on $S^{n-p}\times S^p(a)$ preserves $\omega$, there exists a non-trivial parallel $p$-form on $(M,g)$. We also denote it by $\omega$. Since the action on $S^{n-p}\times S^p(a)$ preserves the function $$ x_i \colon S^{n-p}\times S^p(a)\to \mathbb{R},\,((x_0,\ldots,x_{n-p}),(y_0,\ldots,y_p))\mapsto x_i $$ for each $i=1,\ldots,n-p$, we have $\lambda_{n-p}(g)=n-p$. Suppose that $(M,g)$ is isometric to a product $(M^{n-k}_1,g_1)\times (M^{k}_2,g_2)$ ($k\leq n-k$) for some $(n-k)$ and $k$-dimensional closed Riemannian manifolds $(M_1,g_1)$ and $(M_2,g_2)$. Since we have the irreducible decomposition $T_{[(x,y)]} M\cong T_x S^{n-p}\oplus T_y S^p(a)$ of the restricted holonomy action, we get $k=p$. Since $\lambda_1(g)=n-p$, we have that $(M_1,g_1)$ is isometric to $S^{n-p}$. Thus, we get $\lambda_{n-p+1}(g)=n-p$. However the action on $S^{n-p}\times S^p(a)$ does not preserve the function $$ x_0\colon S^{n-p}\times S^p(a)\to \mathbb{R},\,((x_0,\ldots,x_{n-p}),(y_0,\ldots,y_p))\mapsto x_0, $$ and so $\lambda_{n-p+1}(g)\neq n-p$. This is a contradiction. \end{proof} \subsection{Error Estimates} In this subsection, we give error estimates about Proposition \ref{p3a}. Lemma \ref{p4e} (vii) corresponds to Proposition \ref{p3a}. We list the assumptions of this subsection. \begin{Asu} In this subsection, we assume the following: \begin{itemize} \item $(M,g)$ is an $n$-dimensional closed Riemannian manifold with $\Ric_g\geq -Kg$ and $\diam(M)\leq D$ for some positive real numbers $K>0$ and $D>0$. \item $1\leq k \leq n-1$. \item A $k$-form $\omega\in \Gamma(\bigwedge^k T^\ast M)$ satisfies $\|\omega\|_2=1$, $\|\omega\|_\infty\leq L_1$ and $\|\nabla \omega\|_2^2\leq \lambda$ for some $L_1>0$ and $0\leq \lambda\leq 1$. \item A function $f\in C^\infty(M)$ satisfies $\|f\|_{\infty}\leq L_2\|f\|_2$, $\|\nabla f\|_{\infty}\leq L_2\|f\|_2$ and $\|\Delta f\|_2\leq L_2\|f\|_2$ for some $L_2>0$. \end{itemize} \end{Asu} Note that we have \begin{equation}\label{4a0} \|\nabla^2 f\|_2^2=\|\Delta f\|_2^2-\frac{1}{\Vol(M)}\int_M \Ric(\nabla f,\nabla f)\,d\mu_g\leq (1+K)L^2_2\|f\|_2^2 \end{equation} by the Bochner formula. We first show the following: \begin{Lem}\label{p4c} There exists a positive constant $C(n,K,D)>0$ such that $\||\omega|-1\|_2\leq C \lambda^{1/2}$ holds. \end{Lem} \begin{proof} Put $ \overline{\omega}:=\int_M |\omega| \,d\mu_g/\Vol(M). $ Since we have $|\omega|\in W^{1,2}(M)$, we get $$ \||\omega|-\overline{\omega}\|_2^2\leq \frac{1}{\lambda_1(g)}\|\nabla|\omega|\|_2^2\leq \frac{1}{\lambda_1(g)}\|\nabla\omega\|_2^2\leq\frac{\lambda}{\lambda_1(g)}\leq C\lambda $$ by the Kato inequality and the Li-Yau estimate \cite[p.116]{SY}. Therefore, we get $$ |1-\overline{\omega}|=\left|\|\omega\|_2-\|\overline{\omega}\|_2\right|\leq \||\omega|-\overline{\omega}\|_2\leq C\lambda^{1/2}, $$ and so $ \||\omega|-1\|_2\leq C\lambda^{1/2}. $ \end{proof} Let us give error estimates about Proposition \ref{p3a}. \begin{Lem}\label{p4d} There exists a positive constant $C=C(n,k,K,D,L_1,L_2)>0$ such that the following properties hold: \begin{itemize} \item[(i)] We have $$ \frac{1}{\Vol(M)} \int_M |d^{\ast}(\iota(\nabla f)\omega)|^2\,d\mu_g \leq C\|f\|_2^2\lambda. 
$$ \item[(ii)] We have $$ \left|\frac{1}{\Vol(M)}\int_M \Big(\langle \iota(\Ric(\nabla f))\omega,\iota(\nabla f)\omega\rangle -\langle \mathcal{R}_{k-1}(\iota(\nabla f)\omega),\iota(\nabla f)\omega\rangle \Big)\,d\mu_g\right| \leq C\|f\|_2^2\lambda^{1/2}. $$ \item[(iii)] We have $$ \left|\frac{1}{\Vol(M)} \int_M \Big(\langle\Delta(\iota(\nabla f)\omega),\iota(\nabla f)\omega\rangle -\langle \iota(\nabla \Delta f)\omega,\iota(\nabla f)\omega\rangle\Big) \,d\mu_g\right| \leq C\|f\|_2^2\lambda^{1/2}. $$ \item[(iv)] We have $$ \frac{1}{\Vol(M)}\int_M \left|\nabla (\iota(\nabla f)\omega)-\sum_{i=1}^n e^i\otimes \iota(\nabla_{e_i}\nabla f)\omega\right|^2 \,d\mu_g\leq C\|f\|_2^2\lambda. $$ \item[(v)] We have $$ \frac{1}{\Vol(M)}\int_M \left|d (\iota(\nabla f)\omega)-\sum_{i=1}^n e^i\wedge \iota(\nabla_{e_i}\nabla f)\omega\right|^2 \,d\mu_g\leq C\|f\|_2^2\lambda. $$ \item[(vi)] We have $$ \frac{1}{\Vol(M)}\int_M |\nabla (\iota(\nabla f)\omega) |^2\,d\mu_g\leq C\|f\|_2^2. $$ \item[(vii)] We have \begin{align*} &\Bigg|\frac{1}{\Vol(M)}\int_M \langle \iota(\Ric(\nabla f))\omega,\iota(\nabla f)\omega\rangle \,d\mu_g\\ &\quad- \frac{k-1}{k} \frac{1}{\Vol(M)}\int_M \langle \iota(\nabla\Delta f)\omega,\iota(\nabla f)\omega\rangle \,d\mu_g+\|T(\iota(\nabla f)\omega)\|_2^2\Bigg| \leq C\|f\|_2^2\lambda^{1/2}. \end{align*} \item[(viii)] If $M$ is oriented and $1\leq k\leq n/2$, then we have \begin{align*} &\frac{1}{\Vol(M)}\int_M \Ric(\nabla f,\nabla f)|\omega|^2\,d\mu_g\\ \leq & \frac{n-k-1}{n-k} \frac{1}{\Vol(M)}\int_M \langle \nabla\Delta f,\nabla f\rangle|\omega|^2 \,d\mu_g -\|T(\iota(\nabla f)\omega)\|_2^2 -\|T(\iota(\nabla f)\ast\omega)\|_2^2\\ &\qquad\qquad -\left(\frac{n-k-1}{n-k} -\frac{k-1}{k} \right)\|d(\iota(\nabla f)\omega)\|^2_2 +C\|f\|_2^2\lambda^{1/2}. \end{align*} \end{itemize} Although an orthonormal basis $\{e_1,\ldots,e_n\}$ of $TM$ is defined only locally, $\sum_{i=1}^n e^i\otimes \iota(\nabla_{e_i}\nabla f)\omega$ and $\sum_{i=1}^n e^i\wedge \iota(\nabla_{e_i}\nabla f)\omega$ are well-defined as tensors. \end{Lem} \begin{proof} We first prove (i). Since $d^\ast (f\omega)=-\iota(\nabla f)\omega +f d^\ast \omega$ and $d^\ast\circ d^\ast=0$, we have $ d^\ast (\iota(\nabla f)\omega)=-\iota(\nabla f)d^\ast \omega. $ Thus, we get $$ \frac{1}{\Vol(M)} \int_M |d^{\ast}(\iota(\nabla f)\omega)|^2\,d\mu_g \leq C\|\nabla f\|_{\infty}^2 \|\nabla \omega\|_2^2 \leq C\|f\|_2^2 \lambda. $$ To prove (ii) and (iii), we estimate following terms: \begin{align*} &\frac{1}{\Vol(M)} \int_M \langle\iota(\nabla f)\Delta \omega,\iota(\nabla f) \omega\rangle\,d\mu_g,\\ &\frac{1}{\Vol(M)} \int_M \langle\iota(\nabla f)\nabla^\ast \nabla \omega,\iota(\nabla f) \omega\rangle\,d\mu_g,\\ &\frac{1}{\Vol(M)} \int_M \langle \iota(\nabla f) \mathcal{R}_k \omega, \iota(\nabla f)\omega\rangle\,d\mu_g,\\ &\frac{1}{\Vol(M)} \int_M \langle \sum_{i=1}^n\iota(\nabla_{e_i}\nabla f) (\nabla_{e_i}\omega),\iota(\nabla f)\omega\rangle\,d\mu_g,\\ &\frac{1}{\Vol(M)} \int_M \langle\sum_{i=1}^n\iota(e_i)(R(\nabla f,e_i)\omega),\iota(\nabla f) \omega\rangle\,d\mu_g. 
\end{align*} We have \begin{align*} &\int_M \langle\iota(\nabla f)\Delta \omega,\iota(\nabla f) \omega\rangle\,d\mu_g\\ =&\int_M \langle d \omega,d (d f \wedge\iota(\nabla f) \omega)\rangle\,d\mu_g+\int_M \langle d^\ast\omega,d^\ast(d f \wedge\iota(\nabla f) \omega)\rangle\,d\mu_g \end{align*} and \begin{align*} &|\langle d \omega,d (d f \wedge\iota(\nabla f) \omega)\rangle|\\ =&|\langle d \omega, \sum_{i=1}^n d f\wedge e^i \wedge\left(\iota(\nabla_{e_i}\nabla f) \omega+\iota(\nabla f) \nabla_{e_i}\omega \right)\rangle|\\ \leq& C|\nabla \omega||\nabla f|(|\nabla^2 f||\omega|+|\nabla f||\nabla \omega|),\\ &|\langle d^\ast \omega,d^\ast (d f \wedge\iota(\nabla f) \omega)\rangle|\\ =&|\langle d^\ast \omega, \sum_{i=1}^n \iota(e_i)\left( \nabla_{e_i} d f\wedge \iota(\nabla f) \omega+ d f\wedge \iota(\nabla_{e_i} \nabla f) \omega+ d f\wedge\iota(\nabla f) \nabla_{e_i}\omega \right)\rangle|\\ \leq& C|\nabla \omega||\nabla f|(|\nabla^2 f||\omega|+|\nabla f||\nabla \omega|). \end{align*} Thus, we get \begin{equation}\label{4a} \left|\frac{1}{\Vol(M)} \int_M \langle\iota(\nabla f)\Delta \omega,\iota(\nabla f) \omega\rangle\,d\mu_g\right| \leq C\|f\|_2^2\lambda^{1/2}. \end{equation} We have \begin{align*} \int_M \langle\iota(\nabla f)\nabla^\ast\nabla \omega,\iota(\nabla f) \omega\rangle\,d\mu_g =\int_M \langle \nabla \omega,\nabla (d f \wedge\iota(\nabla f) \omega)\rangle\,d\mu_g \end{align*} and $ |\langle \nabla \omega,\nabla (d f \wedge\iota(\nabla f) \omega)\rangle| \leq C|\nabla \omega||\nabla f|(|\nabla^2 f||\omega|+|\nabla f||\nabla \omega|). $ Thus, we get \begin{equation}\label{4aa} \left|\frac{1}{\Vol(M)} \int_M \langle\iota(\nabla f)\nabla^\ast\nabla \omega,\iota(\nabla f) \omega\rangle\,d\mu_g\right| \leq C\|f\|_2^2\lambda^{1/2}. \end{equation} By Theorem \ref{p2b}, (\ref{4a}) and (\ref{4aa}), we have \begin{equation}\label{4b} \begin{split} &\left|\frac{1}{\Vol(M)} \int_M \langle \iota(\nabla f) \mathcal{R}_k \omega, \iota(\nabla f)\omega\rangle\,d\mu_g\right|\\ \leq & \frac{1}{\Vol(M)}\left(\left|\int_M \langle \iota(\nabla f)\Delta \omega, \iota(\nabla f) \omega\rangle\,d\mu_g\right|+ \left|\int_M \langle \iota(\nabla f)\nabla^\ast\nabla \omega, \iota(\nabla f)\omega\rangle\,d\mu_g\right|\right)\\ \leq & C\|f\|_2^2\lambda^{1/2}. \end{split} \end{equation} Since $ |\langle \sum_{i=1}^n\iota(\nabla_{e_i}\nabla f) (\nabla_{e_i}\omega),\iota(\nabla f)\omega\rangle| \leq C|\omega||\nabla f| |\nabla \omega||\nabla^2 f|, $ we have \begin{equation}\label{4c} \left|\frac{1}{\Vol(M)} \int_M \langle \sum_{i=1}^n\iota(\nabla_{e_i}\nabla f) (\nabla_{e_i}\omega),\iota(\nabla f)\omega\rangle\,d\mu_g\right| \leq C \|f\|_2^2\lambda^{1/2}. \end{equation} Let us estimate \begin{equation*} \frac{1}{\Vol(M)} \int_M \langle\sum_{i=1}^n\iota(e_i)(R(\nabla f,e_i)\omega),\iota(\nabla f) \omega\rangle\,d\mu_g. 
\end{equation*} We have \begin{equation*} \begin{split} \left|\frac{1}{\Vol(M)} \int_M \langle \nabla_{\nabla f} d^\ast \omega,\iota(\nabla f) \omega\rangle\,d\mu_g\right|& =\left|\frac{1}{\Vol(M)} \int_M \langle d^\ast \omega,\nabla^\ast (d f\otimes\iota(\nabla f) \omega)\rangle\,d\mu_g\right|\\ &\leq C\|f\|_2^2\lambda^{1/2},\\ \left|\frac{1}{\Vol(M)} \int_M \langle d^\ast \nabla_{\nabla f}\omega,\iota(\nabla f) \omega\rangle\,d\mu_g\right|& =\left|\frac{1}{\Vol(M)} \int_M \langle \nabla \omega, d f \otimes d (\iota(\nabla f) \omega)\rangle\,d\mu_g\right|\\ &\leq C\|f\|_2^2\lambda^{1/2} \end{split} \end{equation*} and $$\left|\frac{1}{\Vol(M)} \int_M \langle \sum_{i,j=1}^n \langle \nabla_{e_j} \nabla f, e_i\rangle\iota(e_j)\nabla_{e_i}\omega,\iota(\nabla f)\omega\rangle\,d\mu_g\right|\\ \leq C\|f\|_2^2\lambda^{1/2}.$$ Thus, by Lemma \ref{p2c} (iii), we get \begin{equation}\label{4d} \left|\frac{1}{\Vol(M)} \int_M \langle\sum_{i=1}^n\iota(e_i)(R(\nabla f,e_i)\omega),\iota(\nabla f) \omega\rangle\,d\mu_g\right| \leq C\|f\|_2^2\lambda^{1/2}. \end{equation} By (\ref{4a}), (\ref{4b}), (\ref{4c}), (\ref{4d}) and Lemma \ref{p2c}, we get (ii) and (iii). Since $ \nabla (\iota(\nabla f)\omega)-\sum_{i=1}^n e^i\otimes \iota(\nabla_{e_i}\nabla f)\omega =\sum_{i=1}^n e^i\otimes\iota(\nabla f)\nabla_{e_i}\omega, $ we get (iv) and (vi). Since $ d (\iota(\nabla f)\omega)-\sum_{i=1}^n e^i\wedge \iota(\nabla_{e_i}\nabla f)\omega =\sum_{i=1}^n e^i\wedge\iota(\nabla f)\nabla_{e_i}\omega, $ we get (v). By Theorem \ref{p2b} and (\ref{2b}), we have \begin{equation*} \begin{split} &\frac{1}{\Vol(M)}\int_M \langle \mathcal{R}_{k-1}(\iota(\nabla f)\omega),\iota(\nabla f)\omega\rangle \,d\mu_g\\ =&\frac{1}{\Vol(M)}\int_M \langle (\Delta-\nabla^\ast\nabla)(\iota(\nabla f)\omega),\iota(\nabla f)\omega\rangle \,d\mu_g\\ =& \frac{k-1}{k} \frac{1}{\Vol(M)}\int_M \langle \Delta (\iota(\nabla f)\omega), \iota(\nabla f)\omega\rangle\,d\mu_g\\ &+\left(\frac{n-k+1}{n-k+2}-\frac{k-1}{k} \right)\|d^\ast(\iota(\nabla f)\omega)\|_2^2-\|T(\iota(\nabla f)\omega)\|_2^2. \end{split} \end{equation*} Thus, by (i), (ii) and (iii), we get (vii) Finally, we prove (viii). Suppose that $M$ is oriented and $1\leq k\leq n/2$. Since $\nabla (\ast \omega)=\ast\nabla \omega$, we have \begin{align*} &\frac{1}{\Vol(M)}\int_M \langle \iota(\Ric(\nabla f))\ast\omega,\iota(\nabla f)\ast\omega\rangle \,d\mu_g\\ \leq & \frac{n-k-1}{n-k} \frac{1}{\Vol(M)}\int_M \langle \iota(\nabla\Delta f)\ast\omega,\iota(\nabla f)\ast\omega\rangle \,d\mu_g-\|T(\iota(\nabla f)\ast\omega)\|_2^2+C\|f\|_2^2\lambda^{1/2} \end{align*} by (vii). Thus, by (\ref{hstar}), (i), (iii) and (vii), we get \begin{align*} &\frac{1}{\Vol(M)}\int_M \Ric(\nabla f,\nabla f)|\omega|^2\,d\mu_g\\ \leq & \frac{n-k-1}{n-k} \frac{1}{\Vol(M)}\int_M \langle \nabla\Delta f,\nabla f\rangle|\omega|^2 \,d\mu_g -\|T(\iota(\nabla f)\omega)\|_2^2 -\|T(\iota(\nabla f)\ast\omega)\|_2^2\\ &\qquad -\left(\frac{n-k-1}{n-k} -\frac{k-1}{k} \right) \frac{1}{\Vol(M)}\int_M \langle \iota(\nabla\Delta f)\omega,\iota(\nabla f)\omega\rangle \,d\mu_g +C\|f\|_2^2\lambda^{1/2}\\ \leq & \frac{n-k-1}{n-k} \frac{1}{\Vol(M)}\int_M \langle \nabla\Delta f,\nabla f\rangle|\omega|^2 \,d\mu_g -\|T(\iota(\nabla f)\omega)\|_2^2 -\|T(\iota(\nabla f)\ast\omega)\|_2^2\\ &\qquad\qquad -\left(\frac{n-k-1}{n-k} -\frac{k-1}{k} \right)\|d(\iota(\nabla f)\omega)\|^2_2 +C\|f\|_2^2\lambda^{1/2}. \end{align*} This gives (viii). \end{proof} \subsection{Eigenvalue Estimate} In this subsection, we complete the proofs of Main Theorems 1 and 3. 
Recall that $\lambda_1(\Delta_{C,p})$ denotes the first eigenvalue of the connection Laplacian $\Delta_{C,p}$ acting on $p$-forms: $$\Delta_{C,p}:=\nabla^\ast\nabla \colon \Gamma(\bigwedge^p T^\ast M)\to\Gamma(\bigwedge^p T^\ast M).$$ It is enough to show Main Theorem 1 when $\lambda_1(\Delta_{C,p})\leq 1$. Note that we always have $ \lambda_1(\Delta_{C,1})\geq 1 $ if $\Ric_g\geq (n-1)g$. We need the following $L^\infty$ estimates. \begin{Lem}\label{Linfes} Take an integer $n\geq 2$ and positive real numbers $K>0$, $D>0$, $\Lambda>0$. Let $(M,g)$ be an $n$-dimensional closed Riemannian manifold with $\Ric\geq-Kg$ and $\diam(M)\leq D$. Then, we have the following: \begin{itemize} \item[(i)] For any function $f\in C^\infty(M)$ and any $\lambda\geq 0$ with $\Delta f=\lambda f$ and $\lambda\leq \Lambda$, then we have $\|\nabla f\|_\infty\leq C(n,K,D,\Lambda)\|f\|_2$ and $\|f\|_\infty\leq C(n,K,D,\Lambda)\|f\|_2$. \item[(ii)] For any $p$-form $\omega\in \Gamma\left(\bigwedge^p T^\ast M\right)$ and any $\lambda\geq 0$ with $\Delta_{C,p} \omega=\lambda \omega$ and $\lambda\leq \Lambda$, then we have $\|\omega\|_\infty\leq C(n,K,D,\Lambda)\|\omega\|_2$. \end{itemize} \end{Lem} \begin{proof} By the gradient estimate for eigenfunctions \cite[Theorem 7.3]{Pe1}, we get (i). Let us show (ii). Since we have \begin{equation*} \Delta |\omega|^2=2\langle \Delta_{C,p} \omega, \omega \rangle-2|\nabla \omega|^2\leq 2 \Lambda |\omega|^2, \end{equation*} we get $\|\omega\|_\infty\leq C$ by \cite[Proposition 9.2.7]{Pe3} (see also Propositions 7.1.13 and 7.1.17 in \cite{Pe3}). Note that our sign convention of the Laplacian is different from \cite{Pe3}. \end{proof} We use the following proposition not only for the proofs of Main Theorems 1 and 3 but also for other main theorems. \begin{Prop}\label{p4e} For given integers $n\geq 4$ and $2\leq p \leq n/2$, there exists a constant $C(n,p)>0$ such that the following property holds. Let $(M,g)$ be an $n$-dimensional closed oriented Riemannian manifold with $\Ric_g\geq (n-p-1)g$. Suppose that an integer $i\in\mathbb{Z}_{>0}$ satisfies $\lambda_i(g)\leq n-p+1$, and there exists an eigenform $\omega$ of the connection Laplacian $\Delta_{C,p}$ acting on $p$-forms with $\|\omega\|_2=1$ corresponding to the eigenvalue $\lambda$ with $0\leq \lambda\leq 1$. Then, we have \begin{align*} &\frac{n-p-1}{n-p}\lambda_i(g)\left(\lambda_i(g)-(n-p)\right)\|f_i\|^2\\ \geq&\|T(\iota(\nabla f_i)\omega)\|_2^2 +\|T(\iota(\nabla f_i)\ast\omega)\|_2^2\\ &+\left(\frac{n-p-1}{n-p} -\frac{p-1}{p} \right)\|d(\iota(\nabla f_i)\omega)\|^2_2 -C\lambda^{1/2}\|f_i\|_2^2, \end{align*} where $f_i$ denotes the $i$-th eigenfunction of the Laplacian acting on functions. \end{Prop} \begin{proof} By Lemma \ref{p4d} (viii), we have \begin{align*} &\frac{n-p-1}{\Vol(M)}\int_M \langle\nabla f_i,\nabla f_i\rangle|\omega|^2\,d\mu_g\\ \leq&\frac{1}{\Vol(M)}\int_M \Ric(\nabla f_i,\nabla f_i)|\omega|^2\,d\mu_g\\ \leq&\frac{n-p-1}{n-p}\frac{\lambda_i(g)}{\Vol(M)}\int_M \langle\nabla f_i,\nabla f_i\rangle|\omega|^2\,d\mu_g -\|T(\iota(\nabla f_i)\omega)\|_2^2 -\|T(\iota(\nabla f_i)\ast\omega)\|_2^2\\ &\qquad\qquad -\left(\frac{n-p-1}{n-p} -\frac{p-1}{p} \right)\|d(\iota(\nabla f_i)\omega)\|^2_2 +C\lambda^{1/2}\|f_i\|_2^2. \end{align*} Thus, we get the proposition by Lemma \ref{p4c}. \end{proof} \begin{proof}[Proof of Main Theorem 1] If $M$ is orientable, we get the theorem immediately by Proposition \ref{p4e}. 
If $M$ is not orientable, we get the theorem by considering the two-sheeted orientable Riemannian covering $\pi\colon (\widetilde{M},\tilde{g})\to (M,g)$ because we have $ \lambda_1(g)\geq\lambda_1(\tilde{g}) $ and $ \lambda_1(\Delta_{C,p},g)\geq \lambda_1(\Delta_{C,p},\tilde{g}). $ \end{proof} Similarly, we get Main Theorem 3 because $\lambda_1(\Delta_{C,p},g)=\lambda_1(\Delta_{C,n-p},g)$ holds if the manifold is orientable. \section{Pinching} In this section, we show the remaining main theorems. Main Theorem 2 is proved in subsection 4.5 except for the orientability, and the orientability is proved in subsection 4.7. Main Theorem 4 is proved in subsection 4.8. We list assumptions of this section. \begin{Asu}\label{asu1} Throughout in this section, we assume the following: \begin{itemize} \item $n\geq 5$, $2\leq p < n/2$ and $1\leq k\leq n-p+1$. \item $(M,g)$ is an $n$-dimensional closed Riemannian manifold with $\Ric_g\geq (n-p-1)g$. \item $C=C(n,p)>0$ denotes a positive constant depending only on $n$ and $p$. \item $\delta>0$ satisfies $\delta\leq \delta_0$ for sufficiently small $\delta_0=\delta_0(n,p)>0$. \item $f_i\in C^\infty(M)$ ($i\in\{1,\ldots,k\}$) is an eigenfunction of the Laplacian acting on functions with $\|f_i\|_2^2=1/(n-p+1)$ corresponding to the eigenvalue $\lambda_i$ with $0<\lambda_i\leq n-p+\delta$ such that $$ \int_M f_i f_j\,d\mu_g=0 $$ holds for any $i\neq j$. \end{itemize} \end{Asu} Note that, for given real numbers $a,b$ with $0<b<a$ and a positive constant $C>0$, we can assume that $ C \delta^a\leq\delta^b. $ At the beginning of each subsections, we add either one of the following assumptions if necessary. \begin{Asu}\label{aspform} There exists an eigenform $\omega\in\Gamma(\bigwedge^p T^\ast M)$ of the connection Laplacian $\Delta_{C,p}$ with $\|\omega\|_2=1$ corresponding to the eigenvalue $\lambda$ with $0\leq \lambda \leq \delta$. \end{Asu} \begin{Asu}\label{asn-pform} There exists an eigenform $\xi\in\Gamma(\bigwedge^{n-p} T^\ast M)$ of the connection Laplacian $\Delta_{C,n-p}$ with $\|\xi\|_2=1$ corresponding to the eigenvalue $\lambda$ with $0\leq \lambda \leq \delta$. \end{Asu} Under our assumptions, we have $\|\omega\|_\infty\leq C$, $\|\xi\|_{\infty} \leq C$, $\|f_i\|_\infty \leq C $ and $\|\nabla f_i\|_\infty \leq C$ for all $i$ by Lemma \ref{Linfes}. By Main Theorems 1 and 3, we have $\lambda_i\geq n-p-C(n,p)\delta^{1/2}$ for all $i$. Note that we do not assume that $\lambda_i=\lambda_i(g)$. \subsection{Useful Techniques} In this subsection, we list some useful techniques for our pinching problems. Although we suppose that Assumption \ref{asu1} holds, most assertions hold under weaker assumptions. The following lemma is a variation of the Cheng-Yau estimate. See \cite[Lemma 2.10]{Ai2} for the proof (see also \cite[Theorem 7.1]{Ch}). \begin{Lem}\label{chya} Take a positive real number $0<\epsilon_1 \leq1$. For any function $f\in \Span_{\mathbb{R}}\{f_1,\ldots,f_k\}$ and any point $x\in M$, we have \begin{equation*} |\nabla f|^2(x)\leq \frac{C}{\epsilon_1}\left(f(p)-f(x)+\epsilon_1\|f\|_2\right)^2, \end{equation*} where $p\in M$ denotes a maximum point of $f$. \end{Lem} The following theorem is an easy consequence of the Bishop-Gromov inequality. \begin{Thm}\label{bigr} For any $p\in M$ and $0<r\leq \diam(M)+1$, we have $r^n \Vol(M)\leq C\Vol(B_r(p))$. \end{Thm} The following theorem is due to Cheeger-Colding \cite{CC2} (see also \cite[Theorem 7.1.10]{Pe3}). 
By this theorem, we get integral pinching conditions along the geodesics under the integral pinching condition for a function on $M$. \begin{Thm}[segment inequality]\label{seg} For any non-negative measurable function $h\colon M\to \mathbb{R}_{\geq 0}$, we have \begin{equation*} \frac{1}{\Vol(M)^2}\int_{M\times M} \frac{1}{d(y_1,y_2)}\int_0^{d(y_1,y_2)} h\circ \gamma_{y_1,y_2}(s) \,dsdy_1dy_2\leq \frac{C}{\Vol(M)}\int_M h\,d\mu_g. \end{equation*} \end{Thm} \begin{Rem} The book \cite{Pe3} deals with the segment $c_{y_1,y_2}\colon[0,1]\to M$ for each $y_1,y_2\in M$, defined to be $c_{y_1,y_2}(0)=y_1$, $c_{x,y}(1)=y_2$ and $\nabla_{\partial /\partial t} \dot{c}=0$. We have $c_{x,y}(t)=\gamma_{x,y}(t d(x,y))$ for all $t\in[0,1]$ and $$ d(y_1,y_2)\int_0^1 h\circ c_{y_1,y_2}(t) \,d t=\int_0^{d(y_1,y_2)} h\circ \gamma_{y_1,y_2}(s) \,d s. $$ \end{Rem} After getting integral pinching conditions along the geodesics, we use the following lemma to get $L^\infty$ error estimate along them. The proof is standard (c.f. \cite[Lemma 2.41]{CC2}). \begin{Lem}\label{trif} Take positive real numbers $l,\epsilon>0$ and a non-negative real number $r\geq 0$. Suppose that a smooth function $u\colon [0,l]\to \mathbb{R}$ satisfies $$\int_0^l |u''(t)+r^2 u(t)| \,dt\leq\epsilon.$$ Then, we have \begin{equation*} \begin{split} \left|u(t)-u(0) \cos r t- \frac{u'(0)}{r} \sin r t\right|&\leq \epsilon\frac{\sinh rt}{r},\\ \left|u'(t)+ r u(0)\sin r t- u'(0)\cos r t\right|&\leq \epsilon+\int_0^t\left|u(s)-u(0)\cos r s-\frac{u'(0)}{r}\sin r s\right|\,ds, \end{split} \end{equation*} for all $t\in [0,l]$, where we defined $ \frac{1}{r}\sin r t:=t,$ $\frac{1}{r}\sinh r t:=t $ if $r=0$. \end{Lem} The following lemma is standard. \begin{Lem}\label{cosi} For all $t\in \mathbb{R}$, we have \begin{equation*} 1-\frac{1}{2}t^2\leq \cos t\leq 1-\frac{1}{2}t^2+\frac{1}{24}t^4. \end{equation*} For any $t\in [-\pi,\pi]$, we have $\cos t\leq 1-\frac{1}{9}t^2$, and so $|t|\leq3(1-\cos t)^{1/2}$. For any $t_1,t_2 \in [0,\pi]$, we have $|t_1-t_2|\leq3|\cos t_1-\cos t_2|^{1/2}$. \end{Lem} Finally, we recall some facts about the geodesic flow. Let $U M$ denotes the sphere bundle defined by $$ U M:=\{u\in TM:|u|=1\}. $$ There exists a natural Riemannian metric $G$ on $UM$, which is the restriction of the Sasaki metric on $TM$ (see \cite[p.55]{Sa}). The Riemannian volume measure $\mu_G$ satisfies $$ \int_{UM} F\,d\mu_G=\int_M \int_{U_p M} F(u)\, d\mu_0(u) \,d\mu_g(p) $$ for any $F\in C^\infty(U M)$, where $\mu_0$ denotes the standard measure on $U_p M\cong S^{n-1}$. The geodesic flow $\phi_t\colon U M\to U M$ ($t\in\mathbb{R}$) is defined by $$ \phi_t(u):=\left.\frac{\partial}{\partial s}\right|_{s=t}\gamma_u (s)\in U_{\gamma_u(t)} M $$ for any $u\in U M$. Though $\phi_t$ does not preserve the metric $G$ in general, it preserves the measure $\mu_G$. This is an easy consequence of \cite[Lemma 4.4]{Sa}, which asserts that the geodesic flow on $T M$ preserve the natural symplectic structure on $T M$. We can easily show the following lemma. \begin{Lem}\label{geofl} For any $f\in C^\infty (M)$ and $l>0$, we have $$ \frac{1}{\Vol(M)}\int_M f \,d\mu_g=\frac{1}{l\Vol(UM)}\int_{UM}\int_0^l f\circ\gamma_u(t)\,d t\,d\mu_G(u). $$ \end{Lem} This kind of lemma was used by Colding \cite{Co1} to prove that the almost equality of the Bishop comparison theorem implies the Gromov-Hausdorff closeness to the standard sphere. \subsection{Estimates for the Segments} In this subsection, we suppose that Assumption \ref{aspform} holds. 
The goal is to give error estimates along the geodesics. We first list some basic consequences of our pinching condition. \begin{Lem}\label{p5c} For any $f\in \Span_{\mathbb{R}}\{f_1,\ldots,f_{k}\}$, we have \begin{itemize} \item[(i)] $\|\iota(\nabla f)\omega\|_2^2\leq C\delta^{1/2}\|f\|_2^2$, \item[(ii)] $\|\nabla(\iota(\nabla f)\omega)\|_2^2\leq C\delta^{1/2}\|f\|_2^2$, \item[(iii)] $\|(|\nabla^2 f|^2-\frac{1}{n-p}|\Delta f|^2)|\omega|^2\|_1\leq C\delta^{1/4}\|f\|_2^2$. \end{itemize} \end{Lem} \begin{proof} It is enough to consider the case when $M$ is orientable. We first assume that $f=f_i$ for some $i=1,\ldots,k$. Then, we have \begin{equation}\label{5a0} \begin{split} &\|d(\iota(\nabla f)\omega)\|^2_2\leq C\delta^{1/2}\|f\|_2^2,\\ &\|d^\ast (\iota(\nabla f)\omega)\|^2_2 \leq C\delta^{1/2}\|f\|_2^2,\quad \|T(\iota(\nabla f)\omega)\|_2^2\leq C\delta^{1/2}\|f\|_2^2,\\ &\|d^\ast (\iota(\nabla f)\ast \omega)\|^2_2 \leq C\delta^{1/2}\|f\|_2^2,\quad \|T(\iota(\nabla f)\ast\omega)\|_2^2 \leq C\delta^{1/2}\|f\|_2^2 \end{split} \end{equation} by Lemma \ref{p4d} (i) and Proposition \ref{p4e}. Thus, by (\ref{2b}), we get \begin{equation}\label{5a} \|\nabla (\iota(\nabla f)\omega)\|^2_2\leq C\delta^{1/2}\|f\|_2^2 \end{equation} and \begin{equation}\label{5b} \|\nabla (\iota(\nabla f)\ast\omega)\|^2_2\leq \frac{1}{n-p} \|d (\iota(\nabla f)\ast \omega)\|^2_2+C\delta^{1/2}\|f\|_2^2. \end{equation} Moreover, by Lemma \ref{p4d} (iii), we have \begin{equation}\label{5c} \begin{split} \|\iota(\nabla f)\omega\|_2^2 =&\frac{1}{\lambda_i}\frac{1}{\Vol(M)}\int_M \langle \iota(\nabla \Delta f)\omega, \iota(\nabla f)\omega\rangle\,d\mu_g\\ \leq& C\|d(\iota(\nabla f)\omega)\|^2_2+C\|d^\ast (\iota(\nabla f)\omega)\|^2_2+C\delta^{1/2}\|f\|_2^2\\ \leq& C\delta^{1/2}\|f\|_2^2. \end{split} \end{equation} For any $f=a_1 f_1+\cdots + a_k f_k\in \Span_{\mathbb{R}}\{f_1,\ldots,f_{k}\}$, we have (\ref{5a0}), (\ref{5a}), (\ref{5b}), (\ref{5c}). For example, we have \begin{equation*} \|\nabla (\iota(\nabla f)\omega)\|_2\leq\sum_{i=1}^k |a_k|\|\nabla (\iota(\nabla f_i)\omega)\|_2\leq C\delta^{1/4}\sum_{i=1}^k |a_k|\|f_i\|_2\leq C\delta^{1/4}\|f\|_2. \end{equation*} Thus, we get (i) and (ii) by (\ref{5a}) and (\ref{5c}). Finally, we prove (iii). Take arbitrary $f\in \Span_{\mathbb{R}}\{f_1,\ldots,f_{k}\}$. We have \begin{equation}\label{5ca} \begin{split} &\left|\sum_{i=1}^n e^i\otimes \iota(\nabla_{e_i}\nabla f)\ast\omega\right|^2\\ =&\sum_{i=1}^n \langle \iota(\nabla_{e_i} \nabla f)\ast\omega,\iota(\nabla_{e_i} \nabla f)\ast\omega\rangle =|\nabla^2 f|^2|\omega|^2-\left|\sum_{i=1}^n e^i\otimes \iota(\nabla_{e_i}\nabla f)\omega\right|^2. \end{split} \end{equation} Thus, we have \begin{equation*} \begin{split} &\frac{1}{\Vol(M)}\int_M \left||\nabla(\iota(\nabla f)\ast \omega)|^2-|\nabla^2 f|^2|\omega|^2\right|\,d\mu_g\\ \leq &\frac{1}{\Vol(M)}\int_M \left||\nabla(\iota(\nabla f)\ast \omega)|^2-\left|\sum_{i=1}^n e^i\otimes \iota(\nabla_{e_i}\nabla f)\ast\omega\right|^2\right| \,d\mu_g\\ &\qquad+\frac{1}{\Vol(M)}\int_M \left|\sum_{i=1}^n e^i\otimes \iota(\nabla_{e_i}\nabla f)\omega\right|^2\,d\mu_g, \end{split} \end{equation*} and so we get \begin{equation}\label{5d} \frac{1}{\Vol(M)}\int_M \left||\nabla(\iota(\nabla f)\ast \omega)|^2-|\nabla^2 f|^2|\omega|^2\right|\,d\mu_g \leq C\delta^{1/2}\|f\|_2^2 \end{equation} by (ii) and Lemma \ref{p4d} (iv) and (vi). 
We have \begin{equation}\label{5e} \begin{split} &\left|\sum_{i=1}^n e^i\wedge \iota(\nabla_{e_i}\nabla f)\ast\omega\right|^2\\ =&\sum_{i=1}^n |\iota(\nabla_{e_i} \nabla f)\ast\omega|^2-\sum_{i,j=1}^n \langle \iota(e_i)\iota(\nabla_{e_j}\nabla f)\ast \omega, \iota(e_j)\iota(\nabla_{e_i}\nabla f)\ast \omega \rangle\\ =&|\nabla^2 f|^2|\omega|^2-\left|\sum_{i=1}^n e^i\otimes \iota(\nabla_{e_i}\nabla f)\omega\right|^2\\ &\qquad -\sum_{i,j,k,l=1}^n \nabla^2 f(e_i,e_k)\nabla^2 f(e_j,e_l)\langle e^i\wedge e^l \wedge \omega, e^j\wedge e^k \wedge \omega \rangle \end{split} \end{equation} by (\ref{5ca}) and (\ref{hstar2}). Since \begin{align*} \langle e^i\wedge e^l \wedge \omega, e^j\wedge e^k \wedge \omega \rangle =&(\delta_{i j}\delta_{k l}-\delta_{i k}\delta_{j l})|\omega|^2 -\delta_{i j}\langle \iota(e_k)\omega,\iota(e_l)\omega\rangle\\ &+\delta_{i k}\langle \iota(e_j)\omega,\iota(e_l)\omega\rangle +\langle e^l\wedge \omega,e^j\wedge e^k\wedge\iota(e_i)\omega\rangle, \end{align*} we have \begin{equation}\label{5f} \begin{split} &\sum_{i,j,k,l=1}^n \nabla^2 f(e_i,e_k)\nabla^2 f(e_j,e_l)\langle e^i\wedge e^l \wedge \omega, e^j\wedge e^k \wedge \omega \rangle\\ =&|\nabla^2 f|^2|\omega|^2-(\Delta f)^2|\omega|^2 -\sum_{i=1}^n | \iota(\nabla_{e_i}\nabla f)\omega|^2 -\sum_{i=1}^n\Delta f \langle \iota(\nabla_{e_i}\nabla f)\omega,\iota(e_i)\omega\rangle\\ &\qquad+\sum_{j,k,l=1}^n \nabla^2 f(e_j,e_l)\langle e^l\wedge \omega,e^j\wedge e^k\wedge\iota(\nabla_{e_k} \nabla f)\omega\rangle. \end{split} \end{equation} By (\ref{5e}) and (\ref{5f}), we get \begin{equation*} \begin{split} \left|\sum_{i=1}^n e^i\wedge \iota(\nabla_{e_i}\nabla f)\ast\omega\right|^2 =&(\Delta f)^2|\omega|^2+\sum_{i=1}^n\Delta f \langle \iota(\nabla_{e_i}\nabla f)\omega,\iota(e_i)\omega\rangle\\ -&\sum_{j,k,l=1}^n \nabla^2 f(e_j,e_l)\langle e^l\wedge \omega,e^j\wedge e^k\wedge\iota(\nabla_{e_k} \nabla f)\omega\rangle, \end{split} \end{equation*} and so \begin{equation}\label{5g} \left|\left|\sum_{i=1}^n e^i\wedge \iota(\nabla_{e_i}\nabla f)\ast\omega\right|^2-(\Delta f)^2|\omega|^2\right| \leq C|\nabla^2 f| |\omega|\left|\sum_{i=1}^n e^i\otimes \iota(\nabla_{e_i}\nabla f)\omega\right| \end{equation} By (\ref{5g}), (ii) and Lemma \ref{p4d}, we get \begin{equation}\label{5h} \frac{1}{\Vol(M)}\int_M \left| |d (\iota(\nabla f)\ast\omega)|^2-(\Delta f)^2|\omega|^2\right|\,d\mu_g \leq C\delta^{1/4}\|f\|_2^2. \end{equation} Since we have $ |\nabla (\iota(\nabla f)\ast\omega)|^2\geq |d (\iota(\nabla f)\ast \omega)|^2/(n-p) $ at each point by (\ref{2b}), we get (iii) by (\ref{5b}), (\ref{5d}) and (\ref{5h}). \end{proof} We use the following notation. \begin{notation}\label{np5d} Take $f\in \Span_{\mathbb{R}}\{f_1,\ldots,f_{k}\}$ with $\|f\|_2^2=1/(n-p+1)$ and put \begin{align*} h_0&:=|\nabla^2 f|^2, \quad h_1:=||\omega|^2-1|, \quad h_2:=|\nabla \omega|^2,\\ h_3&:=|\iota(\nabla f)\omega |^2, \quad h_4:=|\nabla (\iota(\nabla f)\omega)|^2,\quad h_5:=\left|\sum_{i=1}^n e^i\otimes\iota(\nabla_{e_i}\nabla f)\omega\right|^2\\ h_6&:=\left||\nabla^2 f|^2-\frac{1}{n-p}(\Delta f)^2\right||\omega|^2. 
\end{align*} For each $y_1\in M$, we define \begin{align*} D_f(y_1):=&\Big\{y_2\in I_{y_1}\setminus\{y_1\}:\frac{1}{d(y_1,y_2)}\int_{0}^{d(y_1,y_2)} h_0\circ \gamma_{y_1,y_2}(s)\,d s\leq \delta^{-1/50} \text{ and}\\ &\qquad \quad\frac{1}{d(y_1,y_2)}\int_{0}^{d(y_1,y_2)} h_i\circ \gamma_{y_1,y_2}(s)\,d s\leq \delta^{1/5} \text{ for all $i=1,\ldots,6$} \Big\},\\ Q_f:=&\{y_1\in M: \Vol(M\setminus D_f(y_1))\leq\delta^{1/100}\Vol(M)\},\\ E_f(y_1):=&\Big\{u\in U_{y_1} M: \frac{1}{\pi }\int_{0}^{\pi} h_0\circ \gamma_u(s)\,d s\leq \delta^{-1/50} \text{ and }\frac{1}{\pi}\int_{0}^{\pi} h_i\circ \gamma_u (s)\,d s\leq \delta^{1/5} \\ &\qquad \quad\qquad \quad\qquad \quad\qquad \quad\qquad \quad\qquad \quad\qquad \quad\text{ for all $i=1,\ldots,6$} \Big\},\\ R_f:=&\{y_1\in M: \Vol(U_{y_1} M\setminus E_f(y_1))\leq\delta^{1/100}\Vol(U_{y_1}M)\}. \end{align*} \end{notation} Now, we use the segment inequality and Lemma \ref{geofl}. We show that we have the integral pinching condition along most geodesics. \begin{Lem}\label{p5d} Take $f\in \Span_{\mathbb{R}}\{f_1,\ldots,f_{k}\}$ with $\|f\|_2^2=1/(n-p+1)$. Then, we have the following properties: \begin{itemize} \item[(i)] $\Vol(M\setminus Q_f)\leq C\delta^{1/100}\Vol(M).$ \item[(ii)] $\Vol(M\setminus R_f)\leq C\delta^{1/100}\Vol(M).$ \end{itemize} \end{Lem} \begin{proof} We have $\|h_i\|_1\leq C\delta^{1/4}$ for all $i=1,\ldots,6$ by the assumption, Lemmas \ref{p4c}, \ref{p4d} (iv) and \ref{p5c}, and we have $\|h_0\|_1\leq C$ by (\ref{4a0}). For any $y_1\in M\setminus Q_f$, we have $\Vol(M\setminus D_f(y_1))>\delta^{1/100}\Vol(M)$, and so we have either \begin{equation*} \frac{1}{\Vol(M)}\int_M\frac{1}{d(y_1,y_2)}\int_0^{d(y_1,y_2)}h_0\circ \gamma_{y_1,y_2}(s)\,d s \,d y_2\geq \frac{1}{7}\delta^{-1/100} \end{equation*} or \begin{equation*} \frac{1}{\Vol(M)}\int_M\frac{1}{d(y_1,y_2)}\int_0^{d(y_1,y_2)}h_i\circ \gamma_{y_1,y_2}(s)\,d s \,d y_2\geq\frac{1}{7}\delta^{21/100} \end{equation*} for some $i=1,\ldots,6$. Thus, we get either \begin{equation*} \frac{1}{\Vol(M)}\int_M \int_M\frac{1}{d(y_1,y_2)}\int_0^{d(y_1,y_2)}h_0\circ \gamma_{y_1,y_2}(s)\,d s \,d y_1\,d y_2\geq \frac{1}{49}\delta^{-1/100}\Vol(M\setminus Q_f) \end{equation*} or \begin{equation*} \frac{1}{\Vol(M)}\int_M \int_M\frac{1}{d(y_1,y_2)}\int_0^{d(y_1,y_2)}h_i\circ \gamma_{y_1,y_2}(s)\,d s \,d y_1\,d y_2\geq \frac{1}{49}\delta^{21/100}\Vol(M\setminus Q_f) \end{equation*} for some $i=1,\ldots,6$. Therefore, we get (i) by the segment inequality (Theorem \ref{seg}). Similarly, we get (ii) by Lemma \ref{geofl}. \end{proof} Under the pinching condition along the geodesic, we get the following: \begin{Lem}\label{p5e} Take $f\in \Span_{\mathbb{R}}\{f_1,\ldots,f_{k}\}$ with $\|f\|_2^2=1/(n-p+1)$. Suppose that a geodesic $\gamma\colon [0,l]\to M$ satisfies one of the following: \begin{itemize} \item There exist $x\in M$ and $y\in D_f(x)$ such that $l=d(x,y)$ and $\gamma=\gamma_{x,y}$, \item There exist $x\in M$ and $u\in E_f(x)$ such that $l=\pi$ and $\gamma=\gamma_u$. 
\end{itemize} Then, we have $$ ||\omega|^2(s)-1|\leq C\delta^{1/10},\quad |\iota(\nabla f)\omega|(s)\leq C\delta^{1/10} $$ for all $s\in [0,l]$, and at least one of the following: \begin{itemize} \item[(i)] $\frac{1}{l}\int_0^l|\nabla^2 f|\circ \gamma(s)\,d s\leq C\delta^{1/250}$, \item[(ii)] There exists a parallel orthonormal basis $\{E^1(s),\ldots,E^n(s)\}$ of $T_{\gamma(s)}^\ast M$ along $\gamma$ such that $$ |\omega-E^{n-p+1}\wedge\cdots\wedge E^n|(s)\leq C\delta^{1/25} $$ for all $s\in[0,l]$, and $$ \frac{1}{l}\int_0^l|\nabla^2 f+f\sum_{i=1}^{n-p}E^i\otimes E^i|(s)\, d s\leq C\delta^{1/200}, $$ where we write $|\cdot|(s)$ instead of $|\cdot|\circ\gamma(s)$. \end{itemize} In particular, for both cases, there exists a parallel orthonormal basis $\{E^1(s),\ldots,E^n(s)\}$ of $T_{\gamma(s)}^\ast M$ along $\gamma$ such that $$ \frac{1}{l}\int_0^l|\nabla^2 f+f\sum_{i=1}^{n-p}E^i\otimes E^i|(s)\, d s\leq C\delta^{1/250}. $$ Moreover, if we put $\dot{\gamma}^E:=\sum_{i=1}^{n-p} \langle\dot{\gamma},E_i\rangle E_i,$ where $\{E_1,\ldots,E_n\}$ denotes the dual basis of $\{E^1,\ldots,E^n\}$, then $|\dot{\gamma}^E|$ is constant along $\gamma$, and \begin{equation*} \begin{split} \left|f\circ \gamma(s)-f(\gamma(s_0))\cos (|\dot{\gamma}^E|(s-s_0))-\frac{1}{|\dot{\gamma}^E|}\langle\nabla f,\dot{\gamma}(s_0)\rangle\sin (|\dot{\gamma}^{E}|(s-s_0))\right|&\leq C\delta^{1/250},\\ \left| \langle \nabla f, \dot{\gamma}(s)\rangle+f(\gamma(s_0))|\dot{\gamma}^{E}|\sin (|\dot{\gamma}^{E}|(s-s_0))-\langle\nabla f,\dot{\gamma}(s_0)\rangle\cos (|\dot{\gamma}^{E}|(s-s_0))\right|&\leq C\delta^{1/250} \end{split} \end{equation*} for all $s,s_0\in[0,l]$. \end{Lem} \begin{proof} Let us show the first assertion. Since $\frac{d}{d s}|\omega|^2(s)=2\langle\nabla_{\dot{\gamma}}\omega,\omega\rangle$, we have \begin{align*} \left||\omega|^2(s)-|\omega|^2(0)\right| =&\left|\int_0^s \frac{d}{d s}|\omega|^2(t)\,d t\right|\\ \leq& 2 \left(\int_0^s |\nabla \omega|^2 (t)\,d t\right)^{1/2} \left(\int_0^s |\omega|^2 (t)\, d t\right)^{1/2} \leq C\delta^{1/10} \end{align*} for all $s\in[0,l]$. Since we have $\int_0^l||\omega|^2-1|\, d t \leq \delta^{1/5}$, we get $||\omega|^2(s)-1|\leq C\delta^{1/10}$. In particular, $|\omega|(s)\geq 1/2$, and so \begin{equation}\label{5i} \frac{1}{l}\int_0^l\left||\nabla^2 f|^2-\frac{1}{n-p}(\Delta f)^2\right|(s)\,d s\leq 2\delta^{1/5}. \end{equation} Similarly, we have $|\iota(\nabla f)\omega|(s)\leq C\delta^{1/10}$ for all $s\in [0,l]$. We show the remaining assertions. Put \begin{align*} A_1:=&\left\{s\in [0,l]:\left|\sum_{i=1}^n e^i\otimes\iota(\nabla_{e_i}\nabla f)\omega\right|^2(s)>\delta^{1/10}\right\},\\ A_2:=&\left\{s\in [0,l]:\left||\nabla^2 f|^2-\frac{1}{n-p}(\Delta f)^2\right|(s)>\delta^{1/10}\right\},\\ A_3:=&\left\{s\in [0,l]:|\nabla^2 f|(s)<\delta^{1/250}\right\}. \end{align*} Then, we have $H^1(A_1)\leq \delta^{1/10}l$ and $H^1(A_2)\leq 2\delta^{1/10} l$, where $H^1$ denotes the one dimensional Hausdorff measure. We consider the following two cases: \begin{itemize} \item[(a)] $[0,l]=A_1\cup A_2\cup A_3$, \item[(b)] $[0,l]\neq A_1\cup A_2\cup A_3$. \end{itemize} We first consider the case (a). Since $H^1([0,l]\setminus A_3)\leq 3 \delta^{1/10} l,$ we have \begin{align*} \int_{[0,l]\setminus A_3}|\nabla^2 f|(s)\,d s \leq& \left(\int_{[0,l]\setminus A_3}|\nabla^2 f|^2(s)\,d s\right)^{1/2}H^1 ([0,l]\setminus A_3)^{1/2}\\ \leq &C \delta^{-1/100}\delta^{1/20} l=C\delta^{1/25}l. \end{align*} On the other hand, we have $ \int_{A_3} |\nabla^2 f|(s)\,d s\leq \delta^{1/250} l. 
$ Therefore, we get (i). Moreover, since $|\Delta f|\leq \sqrt{n}|\nabla^2 f|$ and $\left\|\Delta f-(n-p)f\right\|_{\infty}\leq C\delta^{1/2}$, we get $$ \frac{1}{l}\int_0^l|\nabla^2 f+f\sum_{i=1}^{n-p}E^i\otimes E^i|(s)\, d s\leq C\delta^{1/250}, $$ where $\{E^1(s),\ldots,E^n(s)\}$ is any parallel orthonormal basis of $T_{\gamma(s)}^\ast M$ along $\gamma$. We next consider the case (b). There exists $t\in[0,l]$ such that \begin{align*} \left|\sum_{i=1}^n e^i\otimes\iota(\nabla_{e_i}\nabla f)\omega\right|^2(t)&\leq\delta^{1/10},\\ \left||\nabla^2 f|^2-\frac{1}{n-p}(\Delta f)^2\right|(t)&\leq\delta^{1/10},\quad |\nabla^2 f|(t)\geq\delta^{1/250}. \end{align*} Take an orthonormal basis $\{e_1,\ldots,e_n\}$ of $T_{\gamma(t)}M$ such that $\nabla^2 f(e_i,e_j)=\mu_i\delta_{i j}\, (\mu_i\in\mathbb{R})$ for all $i,j=1,\ldots,n$. Let $\{e^1,\ldots,e^n\}$ be the dual basis of $T_{\gamma(t)}^\ast M$. Then, we have $$ \delta^{1/10}\geq \left|\sum_{i=1}^n e^i\otimes\iota(\nabla_{e_i}\nabla f)\omega\right|^2(t) =\sum_{i=1}^n\mu_i^2 |\iota(e_i)\omega|^2(t). $$ Thus, for each $i=1,\ldots,n$, we have at least one of the following: \begin{itemize} \item[(1)] $|\mu_i|\leq \delta^{1/100}$, \item[(2)] $|\iota(e_i)\omega|(t)\leq \delta^{1/25}$. \end{itemize} Since $|\omega|(t)\geq 1/2$, we have $\Card \{i: |\iota(e_i)\omega|(t)\leq \delta^{1/25}\}\leq n-p,$ and so $\Card \{i: |\mu_i|\leq \delta^{1/100}\}\geq p.$ Therefore, we can assume $|\mu_i|\leq \delta^{1/100}$ for all $i=n-p+1,\ldots, n$. Then, we get \begin{align*} \left| \nabla^2 f+\frac{\Delta f}{n-p}\sum_{i=1}^{n-p} e^i\otimes e^i \right|^2(t) =&|\nabla^2 f|^2(t)+\frac{2}{n-p}(\Delta f)(t)\sum_{i=1}^{n-p}\mu_i +\frac{(\Delta f)^2(t)}{n-p}\\ =&|\nabla^2 f|^2(t)-\frac{(\Delta f)^2(t)}{n-p}-\frac{2}{n-p}(\Delta f)(t)\sum_{i=n-p+1}^{n}\mu_i\\ \leq& C\delta^{1/100}. \end{align*} Putting $e_i\otimes e_i$ into the inside of the left hand side, we get $\left|\mu_i+\Delta f(t)/(n-p)\right|^2\leq C\delta^{1/100}$ for all $i=1,\ldots, n-p$, and so \begin{align*} |\mu_i|\geq \frac{|\Delta f(t)|}{n-p}-C\delta^{1/200} \geq &\left(\frac{|\nabla^2 f|^2(t)-\delta^{1/10}}{n-p}\right)^{1/2}-C\delta^{1/200}\\ \geq &\left(\frac{\delta^{1/125}-\delta^{1/10}}{n-p}\right)^{1/2}-C\delta^{1/200} >\delta^{1/100}. \end{align*} Thus, we have $|\iota(e_i)\omega|(t)\leq \delta^{1/25}$ for all $i=1,\ldots,n-p$. Therefore, we get either $|\omega(t)-e^{n-p+1}\wedge\cdots\wedge e^n|\leq C\delta^{1/25}$ or $|\omega(t)+e^{n-p+1}\wedge\cdots\wedge e^n|\leq C\delta^{1/25}$ by $||\omega|^2(t)-1|\leq C\delta^{1/10}$. We can assume that $|\omega(t)-e^{n-p+1}\wedge\cdots\wedge e^n|\leq C\delta^{1/25}$. Let $\{E_1,\ldots,E_n\}$ be the parallel orthonormal basis of $TM$ along $\gamma$ such that $E_i(t)=e_i$, and let $\{E^1,\ldots,E^n\}$ be its dual. Because \begin{align*} \int_0^l \left|\frac{d}{d s}|\omega-E^{n-p+1}\wedge\cdots \wedge E^n|^2(s)\right|\,d s \leq C\delta^{1/10}, \end{align*} we get $|\omega-E^{n-p+1}\wedge\cdots\wedge E^n|(s)\leq C\delta^{1/25}$ for all $s\in [0,l]$. Thus, we get $|\langle\iota(E_i)\omega,\iota(E_j)\omega\rangle|\leq C\delta^{1/25}$ for all $i=1,\cdots,n$ and $j=1,\ldots,n-p$, and $|\langle\iota(E_i)\omega,\iota(E_j)\omega\rangle-\delta_{i j}|\leq C\delta^{1/25}$ for all $i,j=n-p+1,\cdots,n$. 
Therefore, we get \begin{align*} &\left| \left|\sum_{i=1}^n E^i\otimes\iota(\nabla_{E_i}\nabla f)\omega\right|^2-\sum_{i=1}^n\sum_{j=n-p+1}^n(\nabla^2 f(E_i,E_j))^2 \right|\\ =&\left| \sum_{i,j,k=1}^n \nabla^2 f(E_i,E_j)\nabla^2 f(E_i,E_k)\langle\iota(E_j)\omega,\iota(E_k)\omega\rangle-\sum_{i=1}^n\sum_{j=n-p+1}^n(\nabla^2 f (E_i,E_j))^2 \right|\\ \leq&C |\nabla^2 f|^2 \delta^{1/25}. \end{align*} Thus, for all $i=1,\cdots,n$ and $j=1,\ldots,n-p$, we get $$ |\nabla^2 f(E_i,E_j)|^2\leq \left|\sum_{k=1}^n E^k\otimes\iota(\nabla_{E_k}\nabla f)\omega\right|^2+C |\nabla^2 f|^2 \delta^{1/25}, $$ and so $$ \frac{1}{l}\int_0^l|\nabla^2 f (E_i,E_j)|^2(s)\,d s \leq \delta^{1/5}+C\delta^{-1/50}\delta^{1/25} \leq C\delta^{1/50}. $$ This gives $$ \frac{1}{l}\int_0^l|\nabla^2 f (E_i,E_j)|(s)\,d s \leq C\delta^{1/100} $$ for all $i=1,\cdots,n$ and $j=1,\ldots,n-p$. Because \begin{align*} \left| \nabla^2 f+\frac{\Delta f}{n-p}\sum_{i=1}^{n-p}E^i\otimes E^i \right|^2 =|\nabla^2 f|^2-\frac{(\Delta f)^2}{n-p}-2\frac{\Delta f}{n-p}\sum_{i=n-p+1}^{n}\nabla^2 f (E_i,E_i), \end{align*} we have $$ \frac{1}{l}\int_0^l\left| \nabla^2 f+\frac{\Delta f}{n-p}\sum_{i=1}^{n-p}E^i\otimes E^i \right|^2\,d s \leq 2\delta^{1/5}+C\delta^{1/100}\leq C\delta^{1/100} $$ by (\ref{5i}). Since $\left\|f-\Delta f/(n-p)\right\|_{\infty}\leq C\delta^{1/2}$, we get (ii). Let us show the final assertion. It is trivial that $|\dot{\gamma}^E|$ is constant along $\gamma$. Since we have $$ \left(\nabla^2 f+f \sum_{i=1}^{n-p}E^i\otimes E^i\right)(\dot{\gamma},\dot{\gamma}) =\frac{d^2}{d s^2} f\circ \gamma + |\dot{\gamma}^E|^2 f\circ \gamma, $$ we get \begin{equation*} \int_0^l\left|\frac{d^2}{d s^2} f\circ \gamma(s) + |\dot{\gamma}^E|^2 f\circ \gamma(s)\right|\,d s\leq C\delta^{1/250}. \end{equation*} Thus, we get the lemma by Lemma \ref{trif}. \end{proof} \subsection{Almost Parallel $(n-p)$-form I} In this subsection, we suppose that Assumption \ref{asn-pform} holds instead of \ref{aspform}. If $M$ is orientable, then Assumption \ref{asn-pform} implies \ref{aspform}, and so we assume that $M$ is not orientable. We use the following notation. \begin{notation}\label{np5f} Take $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_k\}$ with $\|f\|_2^2=1/(n-p+1)$. Let $\pi\colon (\widetilde{M},\tilde{g})\to (M,g)$ be the two-sheeted oriented Riemannian covering. Put $ \tilde{f}:=f\circ \pi\in C^\infty(\widetilde{M})$, $\widetilde{\xi}:=\pi^\ast \xi\in\Gamma(\bigwedge^{n-p}T^\ast \widetilde{M})$ and $\omega:=\ast \widetilde{\xi}\in\Gamma(\bigwedge^{p}T^\ast \widetilde{M})$. Define $h_0,\ldots,h_6$, $Q_{\tilde{f}}$, $D_{\tilde{f}}(\tilde{y}_1)$, $R_{\tilde{f}}$ and $E_{\tilde{f}}(\tilde{y_1})$ as Notation \ref{np5d} for $\tilde{f}$, $\omega$ and $\tilde{y}_1\in \widetilde{M}$. Put \begin{align*} Q_f:=&M\setminus \pi\left(\widetilde{M}\setminus Q_{\tilde{f}}\right),\quad D_f(y_1):=&&M\setminus \pi\left(\widetilde{M}\setminus\bigcap_{\tilde{y}\in\pi^{-1}(y_1)} D_{\tilde{f}}(\tilde{y})\right),\\ R_f:=&M\setminus \pi\left(\widetilde{M}\setminus R_{\tilde{f}}\right),\quad E_f(y_1):=&&U_{y_1} M\setminus \bigcup_{\tilde{y}\in\pi^{-1}(y_1)}\pi_\ast\left(U_{\tilde{y}}\widetilde{M}\setminus E_{\tilde{f}}(\tilde{y})\right) \end{align*} for each $y_1\in M$. \end{notation} We immediately have the following lemmas by Lemmas \ref{p5d} and \ref{p5e}. 
\begin{Lem}\label{p5f} We have the following: \begin{itemize} \item[(i)] $\Vol(M\setminus Q_f)\leq C\delta^{1/100}\Vol(M)$, and $\Vol(M\setminus D_f(y_1))\leq2\delta^{1/100}\Vol(\widetilde{M})=4\delta^{1/100}\Vol(M)$ for each $y_1\in Q_f$. \item[(ii)] $\Vol(M\setminus R_f)\leq C\delta^{1/100}\Vol(M)$, and $\Vol(U_{y_1} M\setminus E_f(y_1))\leq2\delta^{1/100}\Vol(U_{y_1}M)$ for each $y_1\in R_f$. \item[(iii)] Take $y_1\in M$ and $y_2\in D_f(y_1)$ and one of the lift of $\gamma_{y_1,y_2}$: $$ \tilde{\gamma}_{y_1,y_2}\colon[0,d(y_1,y_2)]\to \widetilde{M}. $$ Put $\tilde{y}_1:=\tilde{\gamma}_{y_1,y_2}(0)\in \widetilde{M}$ and $\tilde{y}_2:=\tilde{\gamma}_{y_1,y_2}(d(y_1,y_2))\in \widetilde{M}$. Then, we have $\tilde{y}_2\in D_{\tilde{f}}(\tilde{y}_1)$. \item[(iv)] Take $y_1\in M$ and $u\in E_f(y_1)$ and one of the lift of $\gamma_u$: $$ \tilde{\gamma}_{u}\colon[0,\pi]\to \widetilde{M}. $$ Put $\tilde{y}_1:=\tilde{\gamma}_{u}(0)\in \widetilde{M}$ and $\tilde{u}:=\dot{\tilde{\gamma}}_{u}(0)\in U_{\tilde{y}_1}\widetilde{M}$. Then, we have $\tilde{u}\in E_{\tilde{f}}(\tilde{y}_1)$. \end{itemize} \end{Lem} \begin{Lem}\label{p5g} Suppose that a geodesic $\gamma\colon [0,l]\to M$ satisfies one of the following: \begin{itemize} \item There exist $x\in M$ and $y\in D_f(x)$ such that $l=d(x,y)$ and $\gamma=\gamma_{x,y}$, \item There exist $x\in M$ and $u\in E_f(x)$ such that $l=\pi$ and $\gamma=\gamma_u$. \end{itemize} Let $\tilde{\gamma}\colon [0,l]\to\widetilde{M}$ be one of the lift of $\gamma$. Then, we have $$ ||\omega|^2(\tilde{\gamma}(s))-1|\leq C\delta^{1/10},\quad |\iota(\nabla \tilde{f})(\omega)|\circ\tilde{\gamma}(s)\leq C\delta^{1/10} $$ for all $s\in [0,l]$, and at least one of the following: \begin{itemize} \item[(i)] $\frac{1}{l}\int_0^l|\nabla^2 f|\circ \gamma(s)\,d s\leq C\delta^{1/250}$, \item[(ii)] There exists a parallel orthonormal basis $\{E^1(s),\ldots,E^n(s)\}$ of $T_{\gamma(s)}^\ast M$ along $\gamma$ such that $$ |\xi-E^{1}\wedge\cdots\wedge E^{n-p}|(s)\leq C\delta^{1/25} $$ for all $s\in[0,s]$, and $$ \frac{1}{l}\int_0^l|\nabla^2 f+f\sum_{i=1}^{n-p}E^i\otimes E^i|(s)\, d s\leq C\delta^{1/200}. $$ \end{itemize} \end{Lem} \subsection{Eigenfunction and Distance} In this subsection, we suppose that either Assumption \ref{aspform} or \ref{asn-pform} holds. In the following, Lemma \ref{p5d} (resp. \ref{p5e}) shall be replaced by Lemma \ref{p5f} (resp. \ref{p5g}) under Assumption \ref{asn-pform}. The following proposition, which asserts that our function is an almost cosine function in some sense, is the goal of this subsection. See Notation \ref{np5d} (under Assumption \ref{aspform}) and Notation \ref{np5f} (under Assumption \ref{asn-pform}) for the definitions of $D_f$, $Q_f$, $E_f$ and $R_f$. \begin{Prop}\label{p53a} Take $f\in \Span_{\mathbb{R}}\{f_1,\ldots,f_{k}\}$ with $\|f\|_2^2=1/(n-p+1)$. There exists a point $p_f\in Q_f$ such that the following properties hold: \begin{itemize} \item[(i)] $\sup_M f\leq f(p_f)+C\delta^{1/100n}$ and $|f(p_f)-1|\leq C\delta^{1/800n}$, \item[(ii)] For any $x\in D_f(p_f)$ with $|\nabla f|(x)\leq \delta^{1/800n}$, we have $ ||f(x)|-1|\leq C\delta^{1/800n}. $ \item[(iii)] For any $x\in D_f(p_f)\cap Q_f\cap R_f$, we have $ |f(x)^2+|\nabla f|^2(x)-1|\leq C \delta^{1/800n}. $ \item[(iv)] Put $ A_f:=\{x\in M: |f(x)-1|\leq \delta^{1/900n}\}. $ Then, we have $$ |f(x)-\cos d(x,A_f)|\leq C\delta^{1/2000n} $$ for all $x\in M$, and $ \sup_{x\in M}d(x,A_f)\leq \pi+ C\delta^{1/100n}. 
$ \end{itemize} \end{Prop} \begin{proof} Take a maximum point $\tilde{p}\in M$ of $f$. Then, by the Bishop-Gromov theorem and Lemma \ref{p5d}, there exists a point $p_f\in Q_f$ with $d(\tilde{p},p_f)\leq C \delta^{1/100n}$. By Lemmas \ref{chya} and \ref{Linfes}, we have \begin{equation}\label{54b} |\nabla f|(p_f)\leq C\delta^{1/200n}. \end{equation} \begin{Clm}\label{c0} For any $x\in D_f(p_f)$ with $|\nabla f|(x)\leq C\delta^{1/800n}$, we have $$ ||f(x)|-|f(p_f)||\leq C\delta^{1/800n}. $$ \end{Clm} \begin{proof}[Proof of Claim \ref{c0}] Since $|\nabla f|(p_f)\leq C\delta^{1/200n}$ and $|\nabla f|(x)\leq C\delta^{1/800n},$ we get \begin{align*} |f\circ \gamma_{p_f,x}(s)-f(p_f)\cos ( |\dot{\gamma}_{p_f,x}^E| s)|&\leq C\delta^{1/200n},\\ |f\circ \gamma_{p_f,x}(d(p_f,x)-s)-f(x)\cos ( |\dot{\gamma}_{p_f,x}^E|s)|&\leq C\delta^{1/800n} \end{align*} for all $s\in[0,d(p_f,x)]$ by Lemma \ref{p5e}. Thus, we have \begin{align*} |f(x)-f(p_f)\cos ( |\dot{\gamma}_{p_f,x}^E| d(p_f,x))|&\leq C\delta^{1/200n},\\ |f(p_f)-f(x)\cos ( |\dot{\gamma}_{p_f,x}^E|d(p_f,x))|&\leq C\delta^{1/800n}, \end{align*} and so we get $||f(x)|-|f(p_f)||\leq C\delta^{1/800n}$. \end{proof} Similarly to $p_f$, we take a point $q_f\in Q_{f}(x)$ with $d(\tilde{q},q_f)\leq C\delta^{1/100n}$, where $\tilde{q}\in M$ is minimum point of $f$. By $\|f\|_{\infty}\geq\|f\|_2=1/\sqrt{n-p+1}$, we have $\max\{|f(p_f)|,|f(q_f)|\}\geq 1/\sqrt{n-p+1}-C\delta^{1/100n}$. Since $|\nabla f|(q_f)\leq C\delta^{1/200n}$, we have $|f(p_f)|\geq |f(q_f)|-C\delta^{1/800n}$ by Claim \ref{c0}. Therefore, we get \begin{equation}\label{54c0} f(p_f)\geq \frac{1}{\sqrt{n-p+1}}-C\delta^{1/800n}\geq\frac{1}{2\sqrt{n-p+1}}. \end{equation} \begin{Clm}\label{c1} Take $x\in M$ and $y\in D_f(x)$. Let $\{E^1,\ldots,E^n\}$ be a parallel orthonormal basis along $\gamma_{x,y}$ in Lemma \ref{p5e}. If $(i)$ holds in the lemma, we can assume that $E_1=\dot{\gamma}_{x,y}$. Then, we have \begin{align} \label{54ba}|\langle\nabla f,\dot{\gamma}_{x,y}(s)\rangle-\langle \nabla f,\dot{\gamma}_{x,y}^{E}(s)\rangle|&\leq C\delta^{1/25},\\ \label{54c} |\langle\nabla f,\dot{\gamma}_{x,y}(s)\rangle|&\leq |\nabla f(\gamma_{x,y}(s))||\dot{\gamma}_{x,y}^{E}|+C\delta^{1/25} \end{align} and \begin{equation*} \begin{split} \left|f\circ \gamma_{x,y}(s)-f(x)\cos (|\dot{\gamma}_{x,y}^{E}|s)-\frac{1}{|\dot{\gamma}_{x,y}^{E}|}\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle\sin (|\dot{\gamma}_{x,y}^{E}|s)\right|&\leq C\delta^{1/250},\\ \left| \langle \nabla f, \dot{\gamma}_{x,y}(s)\rangle+f(x)|\dot{\gamma}_{x,y}^{E}|\sin (|\dot{\gamma}_{x,y}^{E}|s)-\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle\cos (|\dot{\gamma}_{x,y}^{E}|s)\right|&\leq C\delta^{1/250} \end{split} \end{equation*} for all $s\in[0,d(x,y)]$. \end{Clm} \begin{proof}[Proof of Claim \ref{c1}] If (i) holds in the lemma, $\dot{\gamma}_{x,y}=\dot{\gamma}_{x,y}^E$, and so (\ref{54ba}) and (\ref{54c}) are trivial. If (ii) in the lemma holds, we have $|\iota(\nabla f)(E^{n-p+1}\wedge\cdots\wedge E^n)|\leq C\delta^{1/25}$, and so $|\langle\nabla f(x),E_i\rangle|\leq C\delta^{1/25}$ for all $i=n-p+1,\ldots,n$. This gives (\ref{54ba}) and (\ref{54c}). We get the remaining part of the claim by Lemma \ref{p5e} putting $s_0=0$. \end{proof} \begin{Clm}\label{c2} For any $x\in Q_f\cap R_f$ with $|\nabla f|(x)\geq \delta^{1/800n}$, we have \begin{equation*} |f(x)^2+|\nabla f|^2(x)-f(p_f)^2|\leq C\delta^{1/800n}. \end{equation*} Moreover, there exists a point $y\in D_f(p_f)\cap D_f(x)$ such that the following properties hold. 
\begin{itemize} \item[(a)] $d(x,y)< \pi$, \item[(b)] $|f(p_f)-f(y)|\leq C \delta^{1/800n}$, \item[(c)] $|f(x)-f(p_f)\cos d(x,y)|\leq C \delta^{1/800n},$ \item[(d)] For any $z\in M$ with $d(x,z)\leq d(x,y)-\delta^{1/2000n}$, we have $f(p_f)-f(z)\geq \frac{1}{C}\delta^{1/1000n}$. \end{itemize} \end{Clm} \begin{proof}[Proof of Claim \ref{c2}] Take $x\in Q_f\cap R_f$ with $|\nabla f|(x)\geq \delta^{1/800n}$. By the definition of $R_f$, there exists a vector $u\in E_f(x)$ with \begin{equation*} \left| \frac{\nabla f}{|\nabla f|}(x)-u \right|\leq C \delta^{1/100n}. \end{equation*} Thus, we have \begin{equation}\label{54d} \Big|\langle\nabla f(x),\dot{\gamma}_u(0)\rangle-|\nabla f|(x)\Big|=|\nabla f|(x)-\langle\nabla f(x), u\rangle\leq C\delta^{1/100n}. \end{equation} Let $\{E^1,\ldots,E^n\}$ be a parallel orthonormal basis along $\gamma_{u}$ in Lemma \ref{p5e}. We first suppose that (ii) holds in the lemma. Then, for all $i=n-p+1,\ldots, n$, we have $|\langle\nabla f, E_i\rangle|\leq C\delta^{1/25}$, and so $$ |\langle u,E_i\rangle|\leq \left| u-\frac{\nabla f}{|\nabla f|}(x)\right|+\left|\langle \frac{\nabla f}{|\nabla f|}(x), E_i\rangle\right|\leq C\delta^{1/100n}+C\delta^{1/25}\delta^{-1/800n}\leq C\delta^{1/100n}. $$ Thus, we get $|\dot{\gamma}_u^E|^2=|u^E|^2=1-\sum_{i=n-p+1}^n\langle u, E_i\rangle^2\geq 1-C\delta^{1/100n}.$ If (i) holds in the lemma, we can assume $u=E_1$, and so $|\dot{\gamma}_u^E|=|u^E|=1$. For both cases, we get \begin{equation}\label{54e} \begin{split} |f\circ \gamma_u(s)-f(x)\cos s-|\nabla f|(x)\sin s|\leq& C\delta^{1/100n}\\ |\langle\nabla f,\dot{\gamma}_u(s)\rangle+f(x)\sin s-|\nabla f|(x)\cos s|\leq& C\delta^{1/100n} \end{split} \end{equation} for all $s\in [0,\pi]$ by (\ref{54d}). Take $s_0\in[0,\pi]$ such that \begin{align*} \frac{f(x)}{(f(x)^2+|\nabla f|^2(x))^{1/2}}=&\cos s_0,\\ \frac{|\nabla f|(x)}{(f(x)^2+|\nabla f|^2(x))^{1/2}}=&\sin s_0. \end{align*} Since $\sin s_0\geq \frac{1}{C}\delta^{1/800n}$ by the assumption, we have \begin{equation}\label{54ea} \frac{1}{C}\delta^{1/800n}\leq s_0\leq \pi-\frac{1}{C}\delta^{1/800n}. \end{equation} By the definition of $s_0$ and the formulas for $\cos (s-s_0)$ and $\sin(s-s_0)$, we have \begin{equation*} \begin{split} (f(x)^2+|\nabla f|^2(x))^{1/2}\cos (s-s_0)=&f(x)\cos s+|\nabla f|(x)\sin s,\\ (f(x)^2+|\nabla f|^2(x))^{1/2}\sin (s-s_0)=&f(x)\sin s-|\nabla f|(x)\cos s, \end{split} \end{equation*} and so we get \begin{equation}\label{54f} \begin{split} |f\circ \gamma_u(s_0)-(f(x)^2+|\nabla f|^2(x))^{1/2}|\leq& C\delta^{1/100n},\\ |\langle\nabla f,\dot{\gamma}_u(s_0)\rangle|\leq& C\delta^{1/100n} \end{split} \end{equation} by (\ref{54e}). Take $y\in D_f(p_f)\cap D_f(x)$ with $d(\gamma_u(s_0),y)\leq C\delta^{1/100n}$. We have \begin{equation}\label{54fa} d(x,y)\leq d(x,\gamma_u(s_0))+d(\gamma_u(s_0),y)\leq s_0+C\delta^{1/100n}. \end{equation} By (\ref{54f}), we get \begin{equation}\label{54fb} |f(y)-(f(x)^2+|\nabla f|^2(x))^{1/2}|\leq C\delta^{1/100n} \end{equation} Take a parallel orthonormal basis $\{\widetilde{E^1},\ldots,\widetilde{E^n}\}$ of $T^\ast M$ along $\gamma_{x,y}$ in Lemma \ref{p5e}. By (\ref{54ea}) and (\ref{54fa}), we get (a) and $$ \frac{1}{C}\delta^{1/800n}\leq |\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y)+s_0 \leq 2\pi-\frac{1}{C}\delta^{1/800n}, $$ and so \begin{equation}\label{54g} \cos (|\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y)+s_0)\leq 1-\frac{1}{C}\delta^{1/400n}. 
\end{equation} If $|\dot{\gamma}_{x,y}^{\widetilde{E}}|\leq \delta^{1/100}$, we have $|f(y)-f(x)|\leq C\delta^{1/250}$ by Claim \ref{c1}, and so $ (f(x)^2+|\nabla f|^2(x))^{1/2}-f(x)\leq C\delta^{1/100n} $ by (\ref{54fb}). This contradicts to $ |\nabla f|(x)\geq \delta^{1/800n}. $ Thus, we get $|\dot{\gamma}_{x,y}^{\widetilde{E}}|\geq \delta^{1/100}$. Then, we have \begin{equation}\label{54g1} \frac{1}{|\dot{\gamma}_{x,y}^{\widetilde{E}}|}|\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle|\leq |\nabla f|(x)+C\delta^{3/100} \end{equation} and \begin{align*} &(f(x)^2+|\nabla f|^2(x))^{1/2}\\ \leq& f(y)+C\delta^{1/100n}\\ \leq& f(x)\cos (|\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y))+\frac{1}{|\dot{\gamma}_{x,y}^{\widetilde{E}}|}\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle\sin (|\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y))+C\delta^{1/100n}\\ \leq &\left(f(x)^2+\frac{1}{|\dot{\gamma}_{x,y}^{\widetilde{E}}|^2}\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle^2\right)^{1/2}+C\delta^{1/100n} \end{align*} by Claim \ref{c1} and (\ref{54fb}). Thus, \begin{equation}\label{54g2} |\nabla f|^2(x) \leq \frac{1}{|\dot{\gamma}_{x,y}^{\widetilde{E}}|^2}\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle^2 +C\delta^{1/100n}. \end{equation} By (\ref{54g1}) and (\ref{54g2}), we get \begin{equation}\label{54h0} \left|\frac{1}{|\dot{\gamma}_{x,y}^{\widetilde{E}}|^2}\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle^2-|\nabla f|^2(x)\right|\leq C\delta^{1/100n}. \end{equation} This gives \begin{equation}\label{54h} \begin{split} &\left|\frac{1}{|\dot{\gamma}_{x,y}^{\widetilde{E}}|}|\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle|-|\nabla f|(x)\right|\\ \leq& \left|\frac{1}{|\dot{\gamma}_{x,y}^{\widetilde{E}}|^2}\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle^2-|\nabla f|^2(x)\right|\delta^{-1/800n}\leq C\delta^{7/800n}. \end{split} \end{equation} We show that $\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle> 0$. If $\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle\leq 0$, we get \begin{equation*} \left|f(y)-f(x)\cos (|\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y))+ |\nabla f|\sin (|\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y))\right|\leq C\delta^{7/800n} \end{equation*} by (\ref{54h}) and Claim \ref{c1}, and so \begin{equation*} \left|f(y)-(f(x)^2+|\nabla f|^2(x))^{1/2}\cos (|\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y)+s_0)\right|\leq C\delta^{7/800n}. \end{equation*} Thus, we get \begin{equation*} \begin{split} &(f(x)^2+|\nabla f|^2(x))^{1/2}\\ \leq &f(y)+C\delta^{1/100n}\\ \leq &(f(x)^2+|\nabla f|^2(x))^{1/2} \cos (|\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y)+s_0)+C\delta^{7/800n}\\ \leq &(f(x)^2+|\nabla f|^2(x))^{1/2} -\frac{1}{C}\delta^{3/800n} \end{split} \end{equation*} by (\ref{54fb}), (\ref{54g}) and $|\nabla f|(x)\geq \delta^{1/800n}$. This is a contradiction. Therefore, we get $\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle>0$. Thus, \begin{equation}\label{54ha} \begin{split} \left|f(y)-(f(x)^2+|\nabla f|^2(x))^{1/2}\cos (|\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y)-s_0)\right|&\leq C\delta^{7/800n},\\ \left| \langle \nabla f(y), \dot{\gamma}_{x,y}\rangle+|\dot{\gamma}_{x,y}^{\widetilde{E}}|(f(x)^2+|\nabla f|^2(x))^{1/2}\sin (|\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y)-s_0)\right|&\leq C\delta^{7/800n} \end{split} \end{equation} by (\ref{54h}) and Claim \ref{c1}. Then, we have \begin{align*} (f(x)^2+|\nabla f|^2(x))^{1/2} (1-\cos(|\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y)-s_0)) \leq C\delta^{7/800n} \end{align*} by (\ref{54fb}), and so $$ 1-\cos(|\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y)-s_0)\leq C\delta^{3/400n}. 
$$ by $|\nabla f|(x)\geq\delta^{1/800n}$. Since $-\pi<|\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y)-s_0<\pi,$ we get \begin{equation}\label{54i} \left||\dot{\gamma}_{x,y}^{\widetilde{E}}|d(x,y)-s_0\right|\leq C\delta^{3/800n}. \end{equation} Thus, we have $s_0\leq |\dot{\gamma}_{x,y}^{\widetilde{E}}|s_0+ C\delta^{3/800n}$ by (\ref{54i}) and (\ref{54fa}), and so \begin{equation}\label{54j} 1-|\dot{\gamma}_{x,y}^{\widetilde{E}}| \leq C\delta^{1/400n} \end{equation} by (\ref{54ea}). Thus, we get \begin{equation}\label{54k} |d(x,y)-s_0|\leq C\delta^{1/400n}. \end{equation} By (\ref{54ha}) and (\ref{54i}), we have \begin{equation}\label{54l} |\langle\nabla f(y), \dot{\gamma}_{x,y}(d(x,y))\rangle|\leq C\delta^{3/800n}. \end{equation} We have \begin{equation}\label{54m} \begin{split} &\frac{d}{d s}\left(|\nabla f|^2(s)-\langle\nabla f,\dot{\gamma}_{x,y}(s)\rangle^2\right)\\ =&2\left(\langle\nabla_{\dot{\gamma}_{x,y}}\nabla f,\nabla f\rangle(s)-\langle\nabla_{\dot{\gamma}_{x,y}}\nabla f,\dot{\gamma}_{x,y}(s)\rangle\langle\nabla f,\dot{\gamma}_{x,y}(s)\rangle\right)\\ =&2\langle \nabla^2 f+ f\sum_{i=1}^{n-p}\widetilde{E}^i\otimes \widetilde{E}^i,\dot{\gamma}_{x,y}\otimes\nabla f\rangle(s)-2f\langle\nabla f,\dot{\gamma}_{x,y}^{\widetilde{E}}\rangle\\ &-2\langle \nabla^2 f+ f\sum_{i=1}^{n-p}\widetilde{E}^i\otimes \widetilde{E}^i,\dot{\gamma}_{x,y}\otimes\dot{\gamma}_{x,y} \rangle(s)\langle\nabla f,\dot{\gamma}_{x,y}(s)\rangle\\ &+2f|\dot{\gamma}_{x,y}^{\widetilde{E}}|^2\langle\nabla f,\dot{\gamma}_{x,y}(s)\rangle. \end{split} \end{equation} Thus, we get \begin{equation}\label{54n} \begin{split} &\left|\frac{d}{d s}\left(|\nabla f|^2(s)-\langle\nabla f,\dot{\gamma}_{x,y}(s)\rangle^2\right)\right|\\ \leq&C\left|\nabla^2 f+ f\sum_{i=1}^{n-p}\widetilde{E}^i\otimes \widetilde{E}^i\right| +C\left|\langle\nabla f,\dot{\gamma}_{x,y}^{\widetilde{E}}\rangle-|\dot{\gamma}_{x,y}^{\widetilde{E}}|^2\langle\nabla f,\dot{\gamma}_{x,y}(s)\rangle\right|\\ \leq &C\left|\nabla^2 f+ f\sum_{i=1}^{n-p}\widetilde{E}^i\otimes \widetilde{E}^i\right|+ C\delta^{1/400n} \end{split} \end{equation} by (\ref{54ba}) and (\ref{54j}). By integration, we get $$ \int_0^{d(x,y)} \left|\frac{d}{d s}\left(|\nabla f|^2(s)-\langle\nabla f,\dot{\gamma}_{x,y}(s)\rangle^2\right)\right|\,d s \leq C\delta^{1/400n}, $$ and so \begin{equation*} \Big||\nabla f|^2(y)-\langle\nabla f(y),\dot{\gamma}_{x,y}(d(x,y))\rangle^2 -|\nabla f|^2(x)+\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle^2 \Big|\leq C\delta^{1/400n}. \end{equation*} Thus, we get \begin{equation*} |\nabla f|(y)\leq C\delta^{1/800n} \end{equation*} by (\ref{54h0}), (\ref{54j}) and (\ref{54l}). By Claim \ref{c0} and (\ref{54c0}), we get $$ \left||f(y)|-f(p_f)\right|\leq C\delta^{1/800n}. $$ Since $$ f(y)\geq (f(x)^2+|\nabla f|^2(x))^{1/2}-C\delta^{1/100n}\geq \delta^{1/800n}-C\delta^{1/100n}>0 $$ by (\ref{54fb}), we get (b). We get \begin{equation}\label{54o} |(f(x)^2+|\nabla f|^2(x))^{1/2}-f(p_f)| \leq C\delta^{1/800n} \end{equation} by (\ref{54ha}), (\ref{54i}) and (b), and so we get (c) by the definition of $s_0$ and (\ref{54k}). (\ref{54o}) implies the first assertion. Finally, we show (d). Suppose that a point $z\in M$ satisfies $d(x,z)\leq d(x,y)-\delta^{1/2000n}$. Then, $d(x,y)\geq \delta^{1/2000n}$, and so $$f(x)\leq f(p_f)\cos d(x,y)+C\delta^{1/800n}\leq f(p_f)-\frac{1}{C}\delta^{1/1000n}$$ by (\ref{54c0}). There exists $w\in D_f(x)$ with $d(z,w)\leq C\delta^{1/100n}$. Let $\{\overline{E}^1,\ldots,\overline{E}^n\}$ be a parallel orthonormal basis along $\gamma_{x,w}$ in Lemma \ref{p5e}.
If (i) holds in the lemma, we assume that $\overline{E}_1=\dot{\gamma}_{x,w}$. If $|\dot{\gamma}_{x,w}^{\overline{E}}|\leq \delta^{1/100}$, we have \begin{equation*} f(z)\leq f(w)+C\delta^{1/100n} \leq f(x)+ C\delta^{1/100n} \leq f(p_f)-\frac{1}{C}\delta^{1/1000n} \end{equation*} by Claim \ref{c1}. If $|\dot{\gamma}_{x,w}^{\overline{E}}|\geq \delta^{1/100}$, we have \begin{equation*} \begin{split} f(z)\leq& f(w)+C\delta^{1/100n}\\ \leq& f(x)\cos (|\dot{\gamma}_{x,w}^{\overline{E}}|d(x,z))+|\nabla f|(x)\sin (|\dot{\gamma}_{x,w}^{\overline{E}}|d(x,z))+C\delta^{1/100n}\\ \leq& f(p_f)\cos (|\dot{\gamma}_{x,w}^{\overline{E}}|d(x,z)-d(x,y))+\delta^{1/800n} \leq f(p_f)-\frac{1}{C}\delta^{1/1000n} \end{split} \end{equation*} by Claim \ref{c1}, (\ref{54k}), (\ref{54o}) and $-\pi\leq|\dot{\gamma}_{x,w}^{\overline{E}}|d(x,z)-d(x,y)\leq -\delta^{1/2000n}$. In both cases, we get (d). \end{proof} By Claims \ref{c0} and \ref{c2}, we get \begin{equation}\label{54p} |f(x)^2+|\nabla f|^2(x)-f(p_f)^2|\leq C\delta^{1/800n} \end{equation} for all $x\in D_f(p_f)\cap Q_f\cap R_f$. \begin{Clm}\label{c3} We have $ |f(p_f)-1|\leq C\delta^{1/800n}. $ \end{Clm} \begin{proof}[Proof of Claim \ref{c3}] Since $ \|f^2+|\nabla f|^2-f(p_f)^2\|_{\infty}\leq C $ and $ \Vol(M\setminus (D_f(p_f)\cap Q_f\cap R_f) )\leq C\delta^{1/100}, $ we get $$ \frac{1}{\Vol(M)}\int_M|f(x)^2+|\nabla f|^2(x)-f(p_f)^2| \,d\mu_g\leq C \delta^{1/800n} $$ by (\ref{54p}). By the assumption, we have $$ \frac{1}{\Vol(M)}\left|\int_M (f(x)^2+|\nabla f|^2(x)-1) \,d\mu_g\right|\leq C \delta^{1/2}. $$ Thus, we get $ |f(p_f)^2-1|\leq C\delta^{1/800n}. $ Since $f(p_f)>0$, we get the claim. \end{proof} By Claims \ref{c0}, \ref{c3} and (\ref{54p}), we get (i), (ii) and (iii). Finally, we prove (iv). Put $ A_f:=\{x\in M: |f(x)-1|\leq \delta^{1/900n}\}. $ Since we have $ |f(w)-\cos d(w,A_f)|\leq \delta^{1/900n} $ for all $w\in A_f$, we get (iv) on $A_f$. Let us show (iv) on $M\setminus A_f$. Take $w\notin A_f$ and $x\in D_f(p_f)\cap Q_f\cap R_f$ with $d(w,x)\leq C\delta^{1/100n}$. We first suppose that $|\nabla f|(x)\geq \delta^{1/800n}$. Take $y\in D_f(p_f)\cap D_f(x)$ of Claim \ref{c2}. Then, $|f(y)-1|\leq C\delta^{1/800n}$, and so $y\in A_f$. Thus, \begin{equation}\label{54q} d(x, A_f)\leq d(x,y)<\pi. \end{equation} For all $z\in A_f$, we have $|f(p_f)-f(z)|\leq C\delta^{1/900n}$, and so $d(x,z)> d(x,y)-\delta^{1/2000n}$ by Claim \ref{c2} (d). Thus, \begin{equation}\label{54r} d(x,A_f)\geq d(x,y)-\delta^{1/2000n}. \end{equation} By (\ref{54q}) and (\ref{54r}), we get $ |d(x,A_f)- d(x,y)|\leq \delta^{1/2000n}. $ Therefore, we have $|f(x)-\cos d(x,A_f)|\leq C\delta^{1/2000n}$ by Claim \ref{c2} (c), and so $|f(w)-\cos d(w,A_f)|\leq C\delta^{1/2000n}$. By (\ref{54q}), we have $d(w,A_f)\leq \pi+C\delta^{1/100n}$. We next suppose that $|\nabla f|(x)\leq \delta^{1/800n}$. Then, $||f|(x)-1|\leq C\delta^{1/800n}$ by Claim \ref{c0}. If $f(x)\geq 0$, then $w \in A_f$. This contradicts $w\notin A_f$. Thus, we have $|f(x)+1|\leq C\delta^{1/800n}$. We see that (i) in Lemma \ref{p5e} cannot occur for $\gamma_{p_f,x}$ because we have $$ |\nabla^2 f|\geq \frac{1}{\sqrt{n}}|\Delta f|\geq\frac{n-p}{\sqrt{n}}|f|-C\delta^{1/2}. $$ Thus, there exists an orthonormal basis $\{e^1,\ldots,e^n\}$ of $T_x^\ast M$ such that $|\omega(x)-e^{n-p+1}\wedge\cdots\wedge e^n|\leq C\delta^{1/25}$ if Assumption \ref{aspform} holds, and $|\xi(x)-e^{1}\wedge\cdots\wedge e^{n-p}|\leq C\delta^{1/25}$ if Assumption \ref{asn-pform} holds. Take $u\in E_f(x)$ with $|u-e_1|\leq C\delta^{1/100n}$.
Then, we get $ |f\circ\gamma_u(s)+\cos s|\leq C\delta^{1/800n} $ for all $s\in [0,\pi]$ by Lemma \ref{p5e}. Thus, we get $\gamma_u(\pi)\in A_f$, and so \begin{equation}\label{54s} d(w,A_f)\leq \pi+C\delta^{1/100n}. \end{equation} For any $y\in A_f$, there exists $z\in D_f(x)$ with $d(y,z)\leq C\delta^{1/100n}$. Let $\{E^1,\ldots,E^n\}$ be a parallel orthonormal basis of $T^\ast M$ along $\gamma_{x,z}$ of Claim \ref{c1}. Then, \begin{equation*} |1+\cos ( |\dot{\gamma}_{x,z}^{E}| d(x,z))|\leq C\delta^{1/900n} \end{equation*} by Claim \ref{c1}. Thus, we get $d(x,z)\geq\pi-C\delta^{1/1800n}$, and so \begin{equation}\label{54t} d(w,A_f)\geq \pi- C\delta^{1/1800n}. \end{equation} By (\ref{54s}) and (\ref{54t}), we get $|d(w,A_f)-\pi|\leq C\delta^{1/1800n}$, and so $|f(w)-\cos d(w,A_f)|\leq C\delta^{1/1800n}$. In both cases, we get (iv). \end{proof} \subsection{Gromov-Hausdorff Approximation} In this subsection, we suppose that Assumption \ref{asu1} holds for $k=n-p+1$ and that either Assumption \ref{aspform} or Assumption \ref{asn-pform} holds. We construct a Gromov-Hausdorff approximation map, and show that the Riemannian manifold is close to the product metric space $S^{n-p}\times X$ in the Gromov-Hausdorff topology. The following lemma is based on \cite[Lemma 5.2]{Pe1}. \begin{Lem}\label{p54a} Define $\widetilde{\Psi}:=(f_1,\dots,f_{n-p+1})\colon M\to \mathbb{R}^{n-p+1}$. Then, we have $$ \||\widetilde{\Psi}|^2-1\|_{\infty}\leq C\delta^{1/1000n^2}. $$ \end{Lem} \begin{proof} We first prove the following claim: \begin{Clm}\label{p54b} For any $x\in M$, we have $|\widetilde{\Psi}|(x)\leq 1+C\delta^{1/800n}$. \end{Clm} \begin{proof}[Proof of Claim \ref{p54b}] If $|\widetilde{\Psi}|(x)=0$, the claim is trivial. Thus, we assume that $|\widetilde{\Psi}|(x)\neq 0$. Put $$f_x:=\frac{1}{|\widetilde{\Psi}|(x)}\sum_{i=1}^{n-p+1} f_i(x)f_i.$$ Then, we have $\|f_x\|_2^2=1/(n-p+1).$ Thus, we get $ |\widetilde{\Psi}|(x)=f_x(x)\leq 1+ C\delta^{1/800n}$ by Proposition \ref{p53a} (i). \end{proof} For $x\in M$ with $|\widetilde{\Psi}(x)|^2-1< 0$, we have $||\widetilde{\Psi}(x)|^2-1|=1-|\widetilde{\Psi}(x)|^2$. For $x\in M$ with $|\widetilde{\Psi}(x)|^2-1\geq 0$, we have $||\widetilde{\Psi}(x)|^2-1|=|\widetilde{\Psi}(x)|^2-1 \leq 1-|\widetilde{\Psi}(x)|^2+C\delta^{1/800n}$ by Claim \ref{p54b}. In both cases, we have $||\widetilde{\Psi}(x)|^2-1|\leq 1-|\widetilde{\Psi}(x)|^2+C\delta^{1/800n}$. Combining this and $\|\widetilde{\Psi}\|_2=1$, we get $ \||\widetilde{\Psi}|^2-1\|_1 \leq C \delta^{1/800n}.$ Therefore, we have $$ \Vol(\{x\in M:||\widetilde{\Psi}(x)|^2-1|\geq \delta^{1/1000n^2}\})\leq C\delta^{1/800n}\delta^{-1/1000n^2}\leq C\delta^{1/1000n} $$ (note that we assumed $n\geq 5$). This and the Bishop-Gromov inequality imply that, for any $x\in M$, there exists $y\in\{z\in M:||\widetilde{\Psi}(z)|^2-1|< \delta^{1/1000n^2}\}$ with $d(x,y)\leq C\delta^{1/1000n^2}$, and so $||\widetilde{\Psi}(x)|^2-1|\leq C\delta^{1/1000n^2}$ by $\|\nabla|\widetilde{\Psi}|^2\|_\infty\leq C$. Thus, we get the lemma. \end{proof} \begin{notation} In the remaining part of this subsection, we use the following notation. \begin{itemize} \item Let $d_S$ denote the intrinsic distance function on $S^{n-p}(1)$. Note that we have $\cos d_S(x,y)=x\cdot y$ and $$d_{\mathbb{R}^{n-p+1}}(x,y)\leq d_{S}(x,y)\leq 3 d_{\mathbb{R}^{n-p+1}}(x,y)$$ for all $x,y\in S^{n-p}\subset\mathbb{R}^{n-p+1}$ (the second inequality holds since the chord length satisfies $d_{\mathbb{R}^{n-p+1}}(x,y)=2\sin(d_S(x,y)/2)$ and $\sin\theta\geq\frac{2}{\pi}\theta$ for $\theta\in[0,\pi/2]$, so that $d_S(x,y)\leq\frac{\pi}{2}d_{\mathbb{R}^{n-p+1}}(x,y)$). \item For each $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, we use the notation $p_f$ and $A_f$ of Proposition \ref{p53a}.
Recall that we defined $ A_f:=\{x\in M: |f(x)-1|\leq \delta^{1/900n}\}. $ \item Define $\widetilde{\Psi}:=(f_1,\dots,f_{n-p+1})\colon M\to \mathbb{R}^{n-p+1}$ and $$ \Psi:=\frac{\widetilde{\Psi}}{|\widetilde{\Psi}|}\colon M\to S^{n-p}. $$ \item For each $x\in M$, put $$ f_x:=\frac{1}{|\widetilde{\Psi}|(x)}\sum_{i=1}^{n-p+1} f_i(x)f_i=\sum_{i=1}^{n-p+1} \Psi_i(x)f_i, $$ $p_x:=p_{f_x}$ and $A_x:=A_{f_x}$. \item For each $x\in M$ and $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, choose $a_f(x)\in A_f$ such that $d(x,A_f)=d(x,a_f(x)).$ \end{itemize} \end{notation} The goal of this subsection is to show that $$ \Phi_f \colon M\to S^{n-p}\times A_f,\,x\mapsto (\Psi(x),a_f(x)) $$ is a Gromov-Hausdorff approximation map. \begin{Lem}\label{p54c00} For all $x,y\in M$, we have $|\Psi(x)-\Psi(y)|\leq Cd(x,y).$ \end{Lem} \begin{proof} Since we have $\|\nabla f_i\|_{\infty}\leq C$ for all $i\in\{1,\ldots,n-p+1\}$, we get $|\widetilde{\Psi}(x)-\widetilde{\Psi}(y)|\leq Cd(x,y)$ for all $x,y\in M$. Thus, we get the lemma by Lemma \ref{p54a} ($|\widetilde{\Psi}|\geq 1/2$). \end{proof} \begin{Lem}\label{p54c0} Take $u\in S^{n-p}$ and put $f=\sum_{i=1}^{n-p+1}u_i f_i$. Then, we have $$ |d_S(\Psi(y),u)-d(y,A_{f})|\leq C\delta^{1/2000n^2} $$ for all $y\in M$. \end{Lem} \begin{proof} Since $ f(y)=u\cdot\widetilde{\Psi}(y), $ we have $ |u \cdot\widetilde{\Psi}(y)-\cos d(y,A_{f})|\leq C\delta^{1/2000n} $ by Proposition \ref{p53a}, and so $$ |u\cdot \Psi(y)-\cos d(y,A_{f})|\leq C\delta^{1/1000n^2} $$ by Lemma \ref{p54a}. Since $\cos d_S(\Psi(y),u)=u\cdot \Psi(y)$, this and $d(y,A_{f})\leq \pi+C\delta^{1/100n}$ imply the lemma. \end{proof} By the definition of $A_{y}$ and Lemma \ref{p54c0}, we immediately get the following corollaries: \begin{Cor}\label{p54c01} Take $u\in S^{n-p}$ and put $f=\sum_{i=1}^{n-p+1}u_i f_i$. Then, we have $ d_S(\Psi(p_f),u)\leq C\delta^{1/2000n^2}. $ \end{Cor} \begin{Cor}\label{p54c} For each $y_1,y_2\in M$, we have $$ |d_S(\Psi(y_1),\Psi(y_2))-d(y_2,A_{y_1})|\leq C\delta^{1/2000n^2}. $$ \end{Cor} \begin{Cor}\label{p54c1} For each $y\in M$, we have $ d(y,A_{y})\leq C\delta^{1/2000n^2}. $ \end{Cor} For our purpose, we need to establish an almost Pythagorean theorem. To do this, we regard $|\dot{\gamma}^E| s$ in Lemma \ref{p5e} as a distance traveled in $S^{n-p}$. We first approximate its cosine. \begin{Lem}\label{p54d} Take $y_1\in M$, $\tilde{y}_1\in D_{f_{y_1}}(p_{y_1})\cap R_{f_{y_1}}\cap Q_{f_{y_1}}$ with $d(y_1,\tilde{y}_1)\leq C\delta^{1/100n}$ and $y_2\in D_{f_{y_1}}(\tilde{y}_1)$ $($note that we can take such $\tilde{y}_1$ for any $y_1$ by the Bishop-Gromov theorem$)$. Let $\{E^1,\ldots,E^n\}$ be a parallel orthonormal basis of $T^\ast M$ along $\gamma_{\tilde{y}_1,y_2}$ in Lemma \ref{p5e} for $f_{y_1}$. Then, $(ii)$ holds in the lemma, and $$ |\cos(|\dot{\gamma}_{\tilde{y}_1,y_2}^E|s)-\cos d_S(\Psi(y_1),\Psi(\gamma_{\tilde{y}_1,y_2}(s)))|\leq C\delta^{1/2000n^2} $$ for all $s\in[0,d(\tilde{y}_1,y_2)]$. In particular, we have $$ |\cos(|\dot{\gamma}_{\tilde{y}_1,y_2}^E|d(\tilde{y}_1,y_2))-\cos d_S(\Psi(y_1),\Psi(y_2))|\leq C\delta^{1/2000n^2}. $$ \end{Lem} \begin{proof} By Corollary \ref{p54c1}, we have $ d(\tilde{y}_1,A_{y_1})\leq C\delta^{1/2000n^2}, $ and so we get \begin{equation*} f_{y_1}\circ \gamma_{\tilde{y}_1,y_2}(s) \geq \cos d(\gamma_{\tilde{y}_1,y_2}(s),A_{y_1})- C\delta^{1/2000n} \geq \cos s- C\delta^{1/2000n^2} \geq \frac{1}{\sqrt{2}}- C\delta^{1/2000n^2} \end{equation*} for all $s\leq\min\{\pi/4,d(\tilde{y}_1,y_2)\}$.
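In the next step (and again later) we use the elementary trace estimate $|\nabla^2 f|\geq \frac{1}{\sqrt{n}}|\Delta f|$; we record its one-line verification for the reader's convenience (a standard Cauchy--Schwarz computation, independent of the present setting):
% added elementary verification; not part of the original argument
$$ |\Delta f|=\Big|\sum_{i=1}^n \nabla^2 f(e_i,e_i)\Big|\leq \sqrt{n}\,\Big(\sum_{i,j=1}^n \nabla^2 f(e_i,e_j)^2\Big)^{1/2}=\sqrt{n}\,|\nabla^2 f| $$
for any orthonormal frame $\{e_1,\ldots,e_n\}$.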
Therefore, we have \begin{equation*} |\nabla^2 f_{y_1}|(\gamma_{\tilde{y}_1,y_2}(s)) \geq \frac{1}{\sqrt{n}}|\Delta f_{y_1}|(\gamma_{\tilde{y}_1,y_2}(s)) \geq \frac{n-p}{\sqrt{2n}}- C\delta^{1/2000n^2} \end{equation*} for all $s\leq\min\{\pi/4,d(\tilde{y}_1,y_2)\}$. Thus, (i) in Lemma \ref{p5e} cannot occur, and so (ii) holds in the lemma. Since we have $f_{y_1}(y_1)=|\widetilde{\Psi}(y_1)|$, we get \begin{equation}\label{ad1} |f_{y_1}(\tilde{y}_1)-1|\leq C\delta^{1/1000n^2} \end{equation} by Lemma \ref{p54a} and $d(y_1,\tilde{y}_1)\leq C\delta^{1/100n}$. By (\ref{ad1}) and Proposition \ref{p53a} (iii), we have $ |\nabla f_{y_1}|(\tilde{y}_1)\leq C\delta^{1/2000n^2}. $ Thus, we get $$ |f_{y_1}(\gamma_{\tilde{y}_1,y_2}(s))-\cos(|\dot{\gamma}_{\tilde{y}_1,y_2}^E|s)|\leq C\delta^{1/2000n^2} $$ for all $s\in[0,d(\tilde{y}_1,y_2)]$ by Lemma \ref{p5e}. On the other hand, we have $$ |f_{y_1}(\gamma_{\tilde{y}_1,y_2}(s))-\cos d_S(\Psi(y_1),\Psi(\gamma_{\tilde{y}_1,y_2}(s)))|\leq C\delta^{1/2000n^2} $$ for all $s\in[0,d(\tilde{y}_1,y_2)]$ by Proposition \ref{p53a} (iv) and Corollary \ref{p54c}. Thus, we get the lemma. \end{proof} \begin{notation} We use the following notation: \begin{itemize} \item For any $y_1,y_2\in M$ and $f\in \Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, define \begin{align*} &G_f^{y_1}(y_2)\\ :=&\langle\dot{\gamma}_{y_2,y_1}(0),\nabla f(y_2)\rangle d(y_1,y_2)\sin d_S(\Psi(y_1),\Psi(y_2))\\ &\quad +\Big(\cos d(y_2, A_f)\cos d_S(\Psi(y_1),\Psi(y_2))-\cos d(y_1,A_f)\Big) d_S(\Psi(y_1),\Psi(y_2)). \end{align*} \item For any $y_1,y_2\in M$, define \begin{empheq}[left={H^{y_1}(y_2):=\empheqlbrace}]{align*} &1 \qquad d(y_1,y_2)\leq \pi,\\ &0 \qquad d(y_1,y_2)>\pi. \end{empheq} \item For any $y_1,y_2\in M$ and $f\in \Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, define \begin{align*} C_f^{y_1}(y_2):=&\Big\{y_3\in M : \gamma_{y_2,y_3}(s)\in I_{y_1}\setminus\{y_1\} \text{ for almost all $s\in[0,d(y_2,y_3)]$, and}\\ &\qquad \qquad\qquad \qquad \int_{0}^{d(y_2,y_3)} |G_f^{y_1}H^{y_1}|(\gamma_{y_2,y_3}(s))\,d s\leq \delta^{1/12000n^2}\Big\},\\ P_f^{y_1}:=&\{y_2\in M: \Vol(M\setminus C_f^{y_1}(y_2))\leq\delta^{1/12000n^2}\Vol(M)\}. \end{align*} \end{itemize} \end{notation} The pinching condition on $G_f^{y_1}$ plays a crucial role for our purpose. Let us estimate $G_f^{y_1}$. \begin{Lem}\label{p54e} Take $\eta>0$ with $\eta\geq \delta^{1/2000n}$, $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, $y_1\in Q_f$ and $y_2\in D_f(y_1)$. Let $\{E^1,\ldots,E^n\}$ be a parallel orthonormal basis of $T^\ast M$ along $\gamma_{y_1,y_2}$ in Lemma \ref{p5e} for $f$. If $$ ||\dot{\gamma}_{y_1,y_2}^E|d(y_1,y_2)-d_S(\Psi(y_1),\Psi(y_2))|\leq \eta, $$ then $ |G_f^{y_1}(y_2)|\leq C\eta. $ \end{Lem} \begin{proof} We have \begin{align*} \Big|f(y_1)-f(y_2)\cos &(|\dot{\gamma}_{y_1,y_2}^E|d(y_1,y_2))\\ &-\frac{1}{|\dot{\gamma}_{y_1,y_2}^E|}\langle\nabla f(y_2),\dot{\gamma}_{y_2,y_1}(0)\rangle\sin (|\dot{\gamma}_{y_1,y_2}^E|d(y_1,y_2)) \Big| \leq C\delta^{1/250} \end{align*} by Lemma \ref{p5e}. Thus, by Proposition \ref{p53a} (iv), we get \begin{align*} \Big||\dot{\gamma}_{y_1,y_2}^E|\cos d(y_1,A_f)&-|\dot{\gamma}_{y_1,y_2}^E|\cos d(y_2, A_f)\cos (|\dot{\gamma}_{y_1,y_2}^E|d(y_1,y_2))\\ &-\langle\nabla f(y_2),\dot{\gamma}_{y_2,y_1}(0)\rangle\sin (|\dot{\gamma}_{y_1,y_2}^E|d(y_1,y_2)) \Big| \leq C\delta^{1/2000n}, \end{align*} and so we get the lemma.
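In more detail, the last step is an elementary substitution, which we record for the reader's convenience (it is not part of the displayed estimates): write $a:=|\dot{\gamma}_{y_1,y_2}^E|d(y_1,y_2)$ and $b:=d_S(\Psi(y_1),\Psi(y_2))$, so that $|a-b|\leq\eta$ by the hypothesis. Multiplying the last display by $d(y_1,y_2)$ gives
% added elementary verification of the final step
\begin{equation*} \Big|\langle\nabla f(y_2),\dot{\gamma}_{y_2,y_1}(0)\rangle d(y_1,y_2)\sin a+\big(\cos d(y_2,A_f)\cos a-\cos d(y_1,A_f)\big)a\Big|\leq C\delta^{1/2000n}, \end{equation*}
and replacing $a$ by $b$ on the left-hand side changes it by at most $C|a-b|\leq C\eta$, since $t\mapsto \sin t$ and $t\mapsto t\cos t$ are Lipschitz on bounded intervals and all coefficients are bounded by $C$. After this replacement the expression inside the absolute value is exactly $G_f^{y_1}(y_2)$, and so $|G_f^{y_1}(y_2)|\leq C\eta$ by $\eta\geq\delta^{1/2000n}$.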
\end{proof} The quantity $|\dot{\gamma}_{y_1,y_2}^E|$ in the above lemma is slightly different from that of Lemma \ref{p54d}. Comparing these two quantities, we get the following: \begin{Cor}\label{p54f0} Take $\eta>0$ with $\eta\geq \delta^{1/2000n}$, $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, $y_1\in M$, $\tilde{y}_1\in D_{f_{y_1}}(p_{y_1})\cap R_{f_{y_1}}\cap Q_{f_{y_1}}\cap Q_f$ with $d(y_1,\tilde{y}_1)\leq C\delta^{1/100n}$ and $y_2\in D_{f_{y_1}}(\tilde{y}_1)\cap D_f(\tilde{y}_1)$. Let $\{E^1,\ldots,E^n\}$ be a parallel orthonormal basis of $T^\ast M$ along $\gamma_{\tilde{y}_1,y_2}$ in Lemma \ref{p5e} for $f_{y_1}$. If $$ ||\dot{\gamma}_{\tilde{y}_1,y_2}^E|d(\tilde{y}_1,y_2)-d_S(\Psi(\tilde{y}_1),\Psi(y_2))|\leq \eta, $$ then $ |G_f^{\tilde{y}_1}(y_2)|\leq C\eta. $ \end{Cor} \begin{proof} Let $\{\widetilde{E}^1,\ldots,\widetilde{E}^n\}$ be a parallel orthonormal basis of $T^\ast M$ along $\gamma_{\tilde{y}_1,y_2}$ in Lemma \ref{p5e} for $f$ (if (i) holds, then we can assume that $\widetilde{E}^i=E^i$ for all $i$). We show that $ \left||\dot{\gamma}_{\tilde{y}_1,y_2}^E|-|\dot{\gamma}_{\tilde{y}_1,y_2}^{\widetilde{E}}|\right|\leq C\delta^{1/50}. $ Once this is shown, we immediately get the corollary by Lemma \ref{p54e}. We first suppose that Assumption \ref{aspform} holds. We have $|\omega(y_2)-E^{n-p+1}\wedge \cdots\wedge E^n|\leq C\delta^{1/25}$ by Lemmas \ref{p5e} and \ref{p54d}. Since $|\dot{\gamma}_{\tilde{y}_1,y_2}^E|^2=1-|\iota(\dot{\gamma}_{\tilde{y}_1,y_2})(E^{n-p+1}\wedge \cdots\wedge E^n)|^2$, we get \begin{equation}\label{55e} \left||\dot{\gamma}_{\tilde{y}_1,y_2}^E|^2-\left(1-|\iota(\dot{\gamma}_{\tilde{y}_1,y_2})\omega|^2(y_2)\right)\right|\leq C\delta^{1/25}. \end{equation} Similarly, we get \begin{equation}\label{55f} \left||\dot{\gamma}_{\tilde{y}_1,y_2}^{\widetilde{E}}|^2-\left(1-|\iota(\dot{\gamma}_{\tilde{y}_1,y_2})\omega|^2(y_2)\right)\right|\leq C\delta^{1/25}. \end{equation} By (\ref{55e}) and (\ref{55f}), we get $ \left||\dot{\gamma}_{\tilde{y}_1,y_2}^E|-|\dot{\gamma}_{\tilde{y}_1,y_2}^{\widetilde{E}}|\right|\leq C\delta^{1/50}. $ We next suppose that Assumption \ref{asn-pform} holds. Similarly, we have \begin{align*} \left||\dot{\gamma}_{\tilde{y}_1,y_2}^E|^2-|\iota(\dot{\gamma}_{\tilde{y}_1,y_2})\xi|^2(y_2)\right|\leq& C\delta^{1/25},\\ \left||\dot{\gamma}_{\tilde{y}_1,y_2}^{\widetilde{E}}|^2-|\iota(\dot{\gamma}_{\tilde{y}_1,y_2})\xi|^2(y_2)\right|\leq& C\delta^{1/25}, \end{align*} and so $ \left||\dot{\gamma}_{\tilde{y}_1,y_2}^E|-|\dot{\gamma}_{\tilde{y}_1,y_2}^{\widetilde{E}}|\right|\leq C\delta^{1/50}. $ In both cases, we get the corollary. \end{proof} Let us show the integral pinching condition. \begin{Lem}\label{p54f} Take $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, $y_1\in M$ and $\tilde{y}_1\in D_{f_{y_1}}(p_{y_1})\cap R_{f_{y_1}}\cap Q_{f_{y_1}}\cap Q_f$ with $d(y_1,\tilde{y}_1)\leq C\delta^{1/100n}$. Then, $\|G_f^{\tilde{y}_1} H^{\tilde{y}_1}\|_1\leq C\delta^{1/4000n^2}$ and $ \Vol(M\setminus P_f^{\tilde{y}_1})\leq C\delta^{1/12000n^2}. $ \end{Lem} \begin{proof} Take arbitrary $y_2\in D_f(\tilde{y}_1)\cap D_{f_{y_1}}(\tilde{y}_1)$. Let $\{E^1,\ldots,E^n\}$ be a parallel orthonormal basis of $T^\ast M$ along $\gamma_{\tilde{y}_1,y_2}$ in Lemma \ref{p5e} for $f_{y_1}$. Then, if $d(\tilde{y}_1,y_2)\leq \pi$, we have $ ||\dot{\gamma}_{\tilde{y}_1,y_2}^E|d(\tilde{y}_1,y_2)-d_S(\Psi(\tilde{y}_1),\Psi(y_2))|\leq C\delta^{1/4000n^2} $ by Lemmas \ref{p54c00} and \ref{p54d}.
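Here the exponent halves when passing from the cosine estimate of Lemma \ref{p54d} to the angle estimate; this uses only the following elementary bound, which we record for the reader's convenience (it is not part of the cited lemmas): for $a,b\in[0,\pi]$,
% added elementary bound used to convert cosine estimates into angle estimates
\begin{equation*} |\cos a-\cos b|=2\sin\frac{a+b}{2}\,\sin\frac{|a-b|}{2}\geq 2\cdot\frac{2}{\pi}\cdot\frac{|a-b|}{2}\cdot\frac{2}{\pi}\cdot\frac{|a-b|}{2}=\frac{2}{\pi^2}(a-b)^2, \end{equation*}
where we used $\sin\theta\geq\frac{2}{\pi}\min(\theta,\pi-\theta)$ for $\theta\in[0,\pi]$ together with $\frac{|a-b|}{2}\leq\min\left(\frac{a+b}{2},\pi-\frac{a+b}{2}\right)$; thus $|a-b|\leq \pi\left(|\cos a-\cos b|/2\right)^{1/2}$.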
Thus, by Corollary \ref{p54f0}, we have $$ \sup_{D_f(\tilde{y}_1)\cap D_{f_{y_1}}(\tilde{y}_1)}|G_f^{\tilde{y}_1} H^{\tilde{y}_1}|\leq C\delta^{1/4000n^2}. $$ Since $\Vol(M\setminus (D_f(\tilde{y}_1)\cap D_{f_{y_1}}(\tilde{y}_1)))\leq C\delta^{1/100}\Vol(M)$ and $\|G_f^{\tilde{y}_1} H^{\tilde{y}_1}\|_\infty\leq C$, we get $\|G_f^{\tilde{y}_1} H^{\tilde{y}_1}\|_1\leq C\delta^{1/4000n^2}.$ By the segment inequality (Theorem \ref{seg}), we get the remaining part of the lemma. \end{proof} \begin{notation}\label{order} We use the following notation. $$\eta_0=\delta^{1/12000n^3},\, \eta_1=\eta_0^{1/26}, \, \eta_2=\eta_1^{1/78}\text{ and } L=\eta_2^{1/150}. $$ \end{notation} We use Lemma \ref{p54f} to give the almost Pythagorean theorem for the special case (see Lemma \ref{p54l}). For the general case, we need to estimate $\|G_f^{\tilde{y}_1}\|_1$. To do this, we show in Lemma \ref{p54n} that $|\dot{\gamma}_{\tilde{y}_1,y_2}^E|d(\tilde{y}_1,y_2)\leq \pi+L$ under the assumption of Lemma \ref{p54d}. Then, we can estimate $\|G_f^{\tilde{y}_1}\|_1$ similarly to Lemma \ref{p54f}. After proving that, we use Lemma \ref{p54i} again to give the almost Pythagorean theorem for the general case. The following lemma, which guarantees that an almost shortest path from a point in $M$ to $A_f$ almost corresponds to a geodesic in $S^{n-p}$ through $\Psi$ under some assumptions, is the first step toward achieving these objectives. \begin{Lem}\label{p54g} Take \begin{itemize} \item $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, \item $u\in S^{n-p}$ with $f=\sum_{i=1}^{n-p+1}u_i f_i$, \item $x,y\in M$, \item $\eta>0$ with $\eta_0\leq\eta\leq L^{1/3n}$. \end{itemize} Suppose \begin{itemize} \item $d(y,A_f)\leq C \eta$, \item $|d(x,A_f)-d(x,y)|\leq C\eta$. \end{itemize} Then, we have the following for all $s,s'\in[0,d(x,y)]$: \begin{itemize} \item[(i)] $|d(\gamma_{y,x}(s),A_f)-s|\leq C\eta$, \item[(ii)] $\left||s-s'|-d_S\left(\Psi(\gamma_{y,x}(s)),\Psi(\gamma_{y,x}(s'))\right)\right|\leq C\eta$, \item[(iii)] If in addition $d(x,A_f)\geq \frac{1}{C}\eta^{1/26}$, there exists $v\in S^{n-p}$ such that $u\cdot v=0$ and $$ d_S(\Psi(\gamma_{y,x}(s)),\gamma_v(s))\leq C\eta^{3/13} $$ for all $s\in[0,d(x,y)]$, where we define $\gamma_v(s):=(\cos s) u+(\sin s) v\in S^{n-p}$. \end{itemize} \end{Lem} \begin{proof} We first prove (i). We have $ d(\gamma_{y,x}(s),A_f)\leq s+ C\eta $ and \begin{align*} d(x,y)-C\eta\leq d(x,A_f)\leq d(\gamma_{y,x}(s),A_f)+d(x,y)-s. \end{align*} Thus, we get (i). We next prove (ii). By Lemma \ref{p54c0}, we have $d_S(\Psi(y),u)\leq C\eta$ and $|d_S(\Psi(\gamma_{y,x}(s)),u)-d(\gamma_{y,x}(s),A_f)|\leq C\delta^{1/2000n^2}$, and so we get \begin{equation}\label{55i} |s-d_S(\Psi(\gamma_{y,x}(s)),\Psi(y))|\leq C \eta \end{equation} for all $s\in[0,d(x,y)]$ by (i). Take arbitrary $s,s'\in[0,d(x,y)]$ with $s<s'$. Then, \begin{equation}\label{55j} \begin{split} s'-s=d(\gamma_{y,x}(s),\gamma_{y,x}(s'))&\geq d(\gamma_{y,x}(s),A_{\gamma_{y,x}(s')})-d(\gamma_{y,x}(s'),A_{\gamma_{y,x}(s')})\\ &\geq d_S(\Psi(\gamma_{y,x}(s)),\Psi(\gamma_{y,x}(s')))-C\delta^{1/2000n^2} \end{split} \end{equation} by Corollaries \ref{p54c} and \ref{p54c1}.
On the other hand, we have \begin{equation*} \begin{split} s'-C\eta\leq& d_S(\Psi(\gamma_{y,x}(s')),\Psi(y))\\ \leq &d_S(\Psi(\gamma_{y,x}(s)),\Psi(\gamma_{y,x}(s')))+d_S(\Psi(\gamma_{y,x}(s)),\Psi(y))\\ \leq &d_S(\Psi(\gamma_{y,x}(s)),\Psi(\gamma_{y,x}(s'))) +s+C\eta \end{split} \end{equation*} by (\ref{55i}), and so \begin{equation}\label{55k} s'-s\leq d_S(\Psi(\gamma_{y,x}(s)),\Psi(\gamma_{y,x}(s'))) +C\eta. \end{equation} By (\ref{55j}) and (\ref{55k}), we get (ii). Finally, we prove (iii). Since $d(x,A_f)\geq\frac{1}{C}\eta^{1/26}$, there exists $s_0\in[0,d(x,y)]$ such that $\frac{1}{C}\eta^{1/26}\leq d(z,y)\leq \pi- \frac{1}{C}\eta^{1/26}$, where we put $z=\gamma_{y,x}(s_0)$. Then, there exists $v\in S^{n-p}$ with $u\cdot v=0$ and $t_1\in[0,\pi]$ such that $ \Psi(z)=(\cos t_1) u+(\sin t_1) v. $ We have \begin{align*} |\cos t_1-\cos d(z,y)|=&|\cos d_S(\Psi(z),u)-\cos s_0|\\ \leq& |\cos d(z,A_f)-\cos s_0|+C\delta^{1/2000n^2} \leq C\eta \end{align*} by Lemma \ref{p54c0} and (i). This gives \begin{equation}\label{55l} |t_1-d(z,y)|\leq C\eta^{1/2}. \end{equation} Take arbitrary $s\in [0,d(x,y)]$. Then, there exist $w\in S^{n-p}$ and $x_1,x_2,x_3\in \mathbb{R}$ such that $w\perp \Span_{\mathbb{R}}\{u,v\}$, $x_1^2+x_2^2+x_3^2=1$ and $ \Psi(\gamma_{y,x}(s))=x_1 u+x_2 v+ x_3 w. $ Since we have $ |s-d_S(\Psi(\gamma_{y,x}(s)),u)|\leq C\eta $ by (i) and Lemma \ref{p54c0}, and $\cos d_S(\Psi(\gamma_{y,x}(s)),u)=x_1$, we get \begin{equation}\label{55m} |\cos s- x_1|\leq C\eta. \end{equation} We have $$ \left||d(z,y)-s|-d_S(\Psi(\gamma_{y,x}(s)),\Psi(z))\right|\leq C\eta $$ by (ii). Since $\cos d_S(\Psi(\gamma_{y,x}(s)),\Psi(z))=x_1 \cos t_1+x_2\sin t_1$, we get \begin{equation}\label{55n} |\cos(d(z,y)-s)- x_1 \cos d(z,y)-x_2\sin d(z,y)|\leq C\eta^{1/2} \end{equation} by (\ref{55l}). By (\ref{55m}) and (\ref{55n}), we have $ \sin d(z,y)|\sin s- x_2|\leq C\eta^{1/2}. $ By the assumption, we have $ \sin d(z,y)\geq \frac{1}{C}\eta^{1/26}, $ and so we get \begin{equation}\label{55o} |\sin s- x_2|\leq C\eta^{6/13}. \end{equation} By (\ref{55m}) and (\ref{55o}), we get \begin{equation*} |\cos d_S(\Psi(\gamma_{y,x}(s)),\gamma_v(s))-1| =|x_1 \cos s+x_2\sin s-1|\leq C\eta^{6/13}. \end{equation*} Thus, we get (iii). \end{proof} The following lemma asserts that, under some assumptions, the velocity of an almost shortest path from a point in $M$ to $A_f$ is parallel to $\nabla f$. \begin{Lem}\label{p54h} Take \begin{itemize} \item $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, \item $x\in D_f(p_f)\cap Q_f \cap R_f$, \item $y\in D_f(x)\cap D_f(p_f)\cap Q_f\cap R_f$, \item $\eta>0$ with $\eta_0\leq\eta\leq L^{1/3n}$. \end{itemize} Suppose \begin{itemize} \item $d(x,A_f)\geq\frac{1}{C}\eta^{1/26}$, \item $d(y,A_f)\leq C \eta$, \item $|d(x,A_f)-d(x,y)|\leq C\eta$. \end{itemize} Let $\{E^1,\ldots,E^n\}$ be a parallel orthonormal basis of $T^\ast M$ along $\gamma_{x,y}$ in Lemma \ref{p5e} for $f$. Then, we have the following for all $s\in[0,d(x,y)]$: \begin{itemize} \item[(i)] $||\dot{\gamma}^E_{x,y}|-1|\leq C \eta^{6/13}$, \item[(ii)] $|\nabla f (\gamma_{y,x}(s))+(\sin s) \dot{\gamma}_{y,x}(s)|\leq C\eta^{3/26}$. \end{itemize} \end{Lem} \begin{proof} We first note that we have \begin{equation}\label{55p} d(x,y)\leq \pi+C\eta \end{equation} by the assumption and Proposition \ref{p53a} (iv). Let us prove (i). By $d(y,A_f)\leq C \eta$, we have $\cos d(y,A_f)\geq 1- C\eta^2$ (note that $1-\cos t\leq t^2/2$).
Thus, we have \begin{equation}\label{55q} |1-f(y)|\leq C\eta^2 \end{equation} by Proposition \ref{p53a} (iv). By Proposition \ref{p53a} (iii), we get $ |\nabla f|(y)\leq C\eta. $ Thus, we have \begin{equation}\label{55r} |f(x)-\cos(|\dot{\gamma}_{x,y}^E|d(x,y))|\leq C\eta \end{equation} by Lemma \ref{p5e}, and so $ ||\dot{\gamma}_{x,y}^E|d(x,y)-d(x,A_f)|\leq C\eta^{1/2} $ by Proposition \ref{p53a} (iv) and (\ref{55p}). By the assumptions, we get (i). We next prove (ii). By Proposition \ref{p53a}, we have $ ||\nabla f|^2(x)-\sin^2 d(x,A_f)|\leq C\delta^{1/2000n}, $ and so $ ||\nabla f|(x)-|\sin d(x,A_f)||\leq C\delta^{1/4000n}. $ Since $\sin d(x, A_f)\geq -C\delta^{1/100n}$ by Proposition \ref{p53a} (iv), we have $ ||\nabla f|(x)-\sin d(x,A_f)|\leq C\delta^{1/4000n}. $ Thus, we get \begin{equation}\label{55s} ||\nabla f|(x)-\sin d(x,y)|\leq C\eta \end{equation} by the assumption. On the other hand, by (i) and Lemma \ref{p5e}, we have $ |f(y)-f(x)\cos d(x,y)-\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle\sin d(x,y)|\leq C\eta^{6/13}, $ and so \begin{equation}\label{55t} |\sin^2 d(x,y)-\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle\sin d(x,y)|\leq C\eta^{6/13} \end{equation} by (\ref{55q}) and (\ref{55r}). We consider the following two cases: \begin{itemize} \item $d(x,y)\leq \pi-\eta^{3/13}$, \item $d(x,y)> \pi-\eta^{3/13}$. \end{itemize} We first suppose that $d(x,y)\leq \pi-\eta^{3/13}$. We get $ |\sin d(x,y)-\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle|\leq C\eta^{3/13}$ by the assumption and (\ref{55t}). By (\ref{55s}), we get \begin{equation}\label{55u} |\nabla f|(x)-\langle\nabla f(x),\dot{\gamma}_{x,y}(0)\rangle \leq C\eta^{3/13}. \end{equation} We next suppose that $d(x,y)> \pi-\eta^{3/13}$. Then, we have $\cos d(x,A_f)\leq -1+C\eta^{6/13}$, and so $|\nabla f|(x)\leq C\eta^{3/13}$ by Proposition \ref{p53a} (iii) and (iv). Thus, we also get (\ref{55u}) in this case. By (i), (\ref{54m}) and Lemma \ref{p5e}, we have $$ \int_0^{d(x,y)} \left|\frac{d}{d s}\left(|\nabla f|^2(\gamma_{x,y}(s))-\langle\nabla f(\gamma_{x,y}(s)), \dot{\gamma}_{x,y}(s)\rangle^2\right)\right|\,d s\leq C\eta^{6/13}. $$ Thus, we get \begin{equation}\label{55v} |\nabla f|^2(\gamma_{x,y}(s))-\langle\nabla f(\gamma_{x,y}(s)), \dot{\gamma}_{x,y}(s)\rangle^2\leq C\eta^{3/13} \end{equation} for all $s\in[0,d(x,y)]$ by (\ref{55u}). Since $$ |\nabla f (\gamma_{x,y}(s))-\langle\nabla f(\gamma_{x,y}(s)), \dot{\gamma}_{x,y}(s)\rangle\dot{\gamma}_{x,y}(s)|^2=|\nabla f|^2(\gamma_{x,y}(s))-\langle\nabla f(\gamma_{x,y}(s)), \dot{\gamma}_{x,y}(s)\rangle^2, $$ we get $ |\nabla f (\gamma_{x,y}(s))-\langle\nabla f(\gamma_{x,y}(s)), \dot{\gamma}_{x,y}(s)\rangle\dot{\gamma}_{x,y}(s)|\leq C\eta^{3/26} $ by (\ref{55v}). Since we have \begin{equation*} |\langle\nabla f(\gamma_{x,y}(s)), \dot{\gamma}_{x,y}(s)\rangle+\cos d(x,y)\sin s-\sin d(x,y) \cos s|\leq C\eta^{3/13} \end{equation*} by (\ref{55r}), (\ref{55s}), (\ref{55u}), (i) and Lemma \ref{p5e}, we get \begin{equation*} |\nabla f (\gamma_{x,y}(s))-\sin (d(x,y)-s)\dot{\gamma}_{x,y}(s)|\leq C\eta^{3/26}. \end{equation*} This gives (ii) after the reparametrization $s\mapsto d(x,y)-s$. \end{proof} The following lemma is crucial for the proof of the almost Pythagorean theorem. \begin{Lem}\label{p54i} Take \begin{itemize} \item $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, \item $x\in D_f(p_f)\cap Q_f \cap R_f$, \item $y\in D_f(x)\cap D_f(p_f)\cap Q_f\cap R_f$, \item $z\in M$, \item $\eta>0$ with $\eta_0\leq\eta\leq L^{1/3n}$ and $T\in [0, d(x,y)]$.
\end{itemize} Suppose \begin{itemize} \item $d(y,A_f)\leq C\eta$, \item $|d(x,A_f)-d(x,y)|\leq C\eta$, \item $\gamma_{y,x}(s)\in I_z\setminus\{z\}$ for almost all $s\in [T,d(x,y)]$, \item $\int_T^{d(x,y)} |G_f^z(\gamma_{y,x}(s))|\,d s\leq C\eta^{3/26}$. \end{itemize} Then, we have $$ \left| d(z,x)^2-d_S(\Psi(z),\Psi(x))^2- d(z,\gamma_{y,x}(T))^2+d_S(\Psi(z),\Psi(\gamma_{y,x}(T)))^2 \right|\leq C\eta^{1/26}. $$ \end{Lem} \begin{proof} If $d(x,A_f)\leq\eta^{1/26}$, then $d(x,y)\leq C\eta^{1/26}$, and so $d(x,\gamma_{y,x}(T))\leq C\eta^{1/26}$. Thus, the lemma follows immediately from Lemma \ref{p54c00} in this case. In the following, we assume that $d(x,A_f)\geq\eta^{1/26}$. Take $u\in S^{n-p}$ with $f=\sum_{i=1}^{n-p+1}u_i f_i$, and $v\in S^{n-p}$ of Lemma \ref{p54g} (iii). Define $$ r(s):=d_S(\Psi (z),\gamma_v(s)). $$ Then, by the triangle inequality and Lemma \ref{p54g} (iii), we have \begin{equation}\label{55w} |r(s)-d_S(\Psi (z),\Psi(\gamma_{y,x}(s)))|\leq C\eta^{3/13}. \end{equation} There exist $w\in S^{n-p}$ and $x_1,x_2,x_3\in \mathbb{R}$ such that $w\perp \Span_{\mathbb{R}}\{u,v\}$, $x_1^2+x_2^2+x_3^2=1$ and $ \Psi(z)=x_1 u+x_2 v+ x_3 w. $ Then, \begin{equation}\label{55x} \cos r(s)=x_1\cos s+x_2\sin s \end{equation} by the definition of $\gamma_v$ in Lemma \ref{p54g} (iii), and so \begin{equation*} -x_1\sin s+x_2\cos s =\frac{d}{d s} \cos r(s) =-r'(s)\sin r(s). \end{equation*} Thus, we get \begin{equation}\label{55y} \begin{split} -r'(s)\sin r(s) \sin s=-x_1\sin^2 s+x_2\sin s\cos s=\cos r(s)\cos s-x_1 \end{split} \end{equation} by (\ref{55x}). Since $x_1=\Psi(z)\cdot u$ and $f(z)=\widetilde{\Psi}(z)\cdot u$, we have \begin{equation}\label{55z} |x_1-\cos d(z,A_f)|\leq C\delta^{1/1000n^2} \end{equation} by Proposition \ref{p53a} (iv) and Lemma \ref{p54a}. By Lemma \ref{p54g}, (\ref{55w}), (\ref{55y}) and (\ref{55z}), we get \begin{equation}\label{56a} \begin{split} &\Big|\Big(\cos d(\gamma_{y,x}(s),A_f)\cos d_S(\Psi(z),\Psi(\gamma_{y,x}(s)))-\cos d(z,A_f)\Big)d_S(\Psi(z),\Psi(\gamma_{y,x}(s)))\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad +r'(s)r(s)\sin r(s) \sin s\Big|\leq C\eta^{3/13}. \end{split} \end{equation} Define $$ l(s):=d(z,\gamma_{y,x}(s)). $$ Then, we have $ l'(s)=\langle\dot{\gamma}_{z,\gamma_{y,x}(s)}(l(s)),\dot{\gamma}_{y,x}(s)\rangle $ for all $s\in [0,d(x,y)]$ with $\gamma_{y,x}(s)\in I_z\setminus\{z\}$, and so $ |l'(s)\sin s+\langle\dot{\gamma}_{z,\gamma_{y,x}(s)}(l(s)),\nabla f(\gamma_{y,x}(s))\rangle|\leq C\eta^{3/26} $ by Lemma \ref{p54h} (ii). Thus, for almost all $s\in [T,d(x,y)]$, we have \begin{equation}\label{56b} \begin{split} \Big|\langle\dot{\gamma}_{\gamma_{y,x}(s),z}(0),&\nabla f(\gamma_{y,x}(s))\rangle l(s)\sin d_S(\Psi(z),\Psi(\gamma_{y,x}(s)))\\ &-l'(s)l(s)\sin r(s)\sin s \Big|\leq C\eta^{3/26} \end{split} \end{equation} by (\ref{55w}). By the definition of $G_f^z$, (\ref{56a}) and (\ref{56b}), for almost all $s\in [T,d(x,y)]$, we have \begin{align*} \Big| G_f^z(\gamma_{y,x}(s))-l'(s)l(s)\sin r(s)\sin s+r'(s)r(s)\sin r(s) \sin s \Big|\leq C\eta^{3/26}. \end{align*} Thus, by the assumption, we get \begin{equation}\label{56c} \int_T^{d(x,y)}\left|\left(\frac{d}{d s}(l(s)^2-r(s)^2)\right)\sin r(s)\sin s\right|\,d s \leq C\eta^{3/26}. \end{equation} Define \begin{align*} I&:=\{s\in [T,d(x,y)]: \eta^{1/26}\leq s\leq \pi -\eta^{1/26}\text{ and }\eta^{1/26}\leq r(s) \leq\pi -\eta^{1/26} \},\\ II&:=[T,d(x,y)]\setminus I.
\end{align*} Then, we have \begin{equation}\label{56ca} \int_I \left|\frac{d}{d s}(l(s)^2-r(s)^2)\right|\,d s \leq C\eta^{1/26} \end{equation} by (\ref{56c}). Let us estimate $H^1(II)$, where $H^1$ denotes the $1$-dimensional Hausdorff measure. Suppose that $$ \{s\in [T,d(x,y)]: r(s)<\eta^{1/26} \text{ or } r(s)>\pi-\eta^{1/26}\}\neq \emptyset, $$ and take arbitrary $s\in[T,d(x,y)]$ such that $r(s)<\eta^{1/26}$ or $r(s)>\pi-\eta^{1/26}$. Then, we have \begin{equation}\label{56d} ||\cos r(s)|-1|\leq C\eta^{1/13}. \end{equation} Note that we have $r(s)\leq \pi$ by $\diam (S^{n-p})=\pi$. By (\ref{55x}), we get \begin{equation}\label{56e} 1-C\eta^{1/13}\leq (x_1^2+x_2^2)^{1/2}\leq 1. \end{equation} Take $s_1\in[0,2\pi]$ such that \begin{align*} \cos s_1=&\frac{x_1}{(x_1^2+x_2^2)^{1/2}},\\ \sin s_1=&\frac{x_2}{(x_1^2+x_2^2)^{1/2}}. \end{align*} Then, we get $||\cos (s-s_1)|-1|\leq C\eta^{1/13}$ by (\ref{55x}), (\ref{56d}) and (\ref{56e}). Thus, there exists $m\in \mathbb{Z}$ such that $ |s-s_1-m\pi|\leq C\eta^{1/26}. $ Then, we have $|m|\leq 2$, and so $$ H^1\left(\{s\in [T,d(x,y)]: r(s)<\eta^{1/26} \text{ or } r(s)>\pi-\eta^{1/26}\}\right)\leq C\eta^{1/26}. $$ Note that we have $d(x,y)\leq d(x,A_f)+C\eta\leq \pi+C\eta$ by the assumption and Proposition \ref{p53a} (iv). Since we have $$ H^1\left(\{s\in [T,d(x,y)]: s<\eta^{1/26} \text{ or } s>\pi-\eta^{1/26}\}\right)\leq C\eta^{1/26}, $$ we get $H^1(II)\leq C\eta^{1/26}$. Since $\left|\frac{d}{d s}(l(s)^2-r(s)^2)\right|\leq C$ for almost all $s\in[T,d(x,y)]$, we get \begin{equation}\label{56f} \int_{II} \left|\frac{d}{d s}(l(s)^2-r(s)^2)\right|\,d s \leq C\eta^{1/26}. \end{equation} By (\ref{56ca}) and (\ref{56f}), we get \begin{equation*} \int_T^{d(x,y)}\left|\frac{d}{d s}(l(s)^2-r(s)^2)\right|\,d s \leq C\eta^{1/26}. \end{equation*} Thus, we have $ |l(d(x,y))^2-r(d(x,y))^2-l(T)^2+r(T)^2|\leq C\eta^{1/26}. $ By (\ref{55w}) and the definition of $l$, we get the lemma. \end{proof} \begin{Def} Take $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$. By Lemma \ref{p54f} and the Bishop-Gromov inequality, for any triple $(x_1,x_2,x_3)\in M\times M\times M$, we can take points $\tilde{x}_1\in D_{f_{x_1}}(p_{x_1})\cap Q_{f_{x_1}} \cap R_{f_{x_1}}\cap Q_f$, $\tilde{x}_2\in D_f(p_f)\cap Q_f \cap R_f\cap P_f^{\tilde{x}_1}$ and $\tilde{x}_3\in D_f(\tilde{x}_2)\cap D_f(p_f)\cap Q_f\cap R_f\cap C_f^{\tilde{x}_1}(\tilde{x}_2)$ such that $d(x_1,\tilde{x}_1)\leq C\delta^{1/100n}$, $d(x_2,\tilde{x}_2)\leq C\eta_0$ and $d(x_3,\tilde{x}_3)\leq C\eta_0$. We call the triple $(\tilde{x}_1,\tilde{x}_2,\tilde{x}_3)$ a ``{\it $\Pi$-triple for $(x_1,x_2,x_3,f)$}''. \end{Def} \begin{Lem}\label{ptrp} Take \begin{itemize} \item $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, \item $x,y,z\in M$, \item $\eta>0$ with $\eta_0\leq\eta\leq L^{1/3n}$ and $T\in [0, d(x,y)]$. \end{itemize} Take a $\Pi$-triple $(\tilde{z},\tilde{x},\tilde{y})$ for $(z,x,y,f)$. Suppose \begin{itemize} \item $d(y,A_f)\leq C\eta$, \item $|d(x,A_f)-d(x,y)|\leq C\eta$, \item $d(\tilde{z},\gamma_{\tilde{y},\tilde{x}}(s))\leq \pi$ for all $s\in[T,d(\tilde{x},\tilde{y})]$. \end{itemize} Then, we have $$ \left| d(\tilde{z},\tilde{x})^2-d_S(\Psi(\tilde{z}),\Psi(\tilde{x}))^2- d(\tilde{z},\gamma_{\tilde{y},\tilde{x}}(T))^2+d_S(\Psi(\tilde{z}),\Psi(\gamma_{\tilde{y},\tilde{x}}(T)))^2 \right|\leq C\eta^{1/26}.
$$ \end{Lem} \begin{proof} We have $(G^{\tilde{z}}_f H^{\tilde{z}})(\gamma_{\tilde{y},\tilde{x}}(s))=G^{\tilde{z}}_f(\gamma_{\tilde{y},\tilde{x}}(s))$ for all $s\in[T,d(\tilde{x},\tilde{y})]$. Thus, we get the lemma immediately by the definition of $C_f^{\tilde{z}}(\tilde{x})$ and Lemma \ref{p54i}. \end{proof} The following lemma guarantees that, under some assumptions, if the images of two points in $M$ under $\Phi_f$ are close to each other in $S^{n-p}\times A_f$, then the two points are close to each other in $M$. \begin{Lem}\label{p54j} Take \begin{itemize} \item $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, \item $x,y,z\in M$, \item $\eta>0$ with $\eta_0\leq\eta\leq L^{1/3n}$. \end{itemize} Suppose \begin{itemize} \item $d(x,A_f)\leq \pi- \frac{1}{C}\eta^{1/78}$ and $d(z,A_f)\leq \pi- \frac{1}{C}\eta^{1/78}$, \item $d(y,A_f)\leq C\eta$, \item $|d(x,A_f)-d(x,y)|\leq C\eta$ and $|d(z,A_f)-d(z,y)|\leq C\eta$, \item $d_S(\Psi(x),\Psi(z))\leq C\eta$. \end{itemize} Then, we have $ d(x,z)\leq C\eta^{1/52}. $ \end{Lem} \begin{proof} We first show the following claim. \begin{Clm}\label{p54k} If $x,y,z\in M$ satisfy: \begin{itemize} \item $d(x,A_f)\leq \frac{1}{2}\pi- \frac{1}{C}\eta^{1/2}$ and $d(z,A_f)\leq \frac{1}{2}\pi- \frac{1}{C}\eta^{1/2}$, \item $d(y,A_f)\leq C\eta$, \item $|d(x,A_f)-d(x,y)|\leq C\eta$ and $|d(z,A_f)-d(z,y)|\leq C\eta$, \item $d_S(\Psi(x),\Psi(z))\leq C\eta^{1/52}$. \end{itemize} Then, we have $ d(x,z)\leq C\eta^{1/52}. $ \end{Clm} \begin{proof}[Proof of Claim \ref{p54k}] Take $u\in S^{n-p}$ with $f=\sum_{i=1}^{n-p+1} u_i f_i$. By the assumptions and Lemma \ref{p54c0}, we have \begin{align*} d_S(u,\Psi(y))\leq& C\eta,\\ |d_S(\Psi(z),u)-d(z,A_f)|\leq &C\delta^{1/2000n^2}. \end{align*} Since we have $|d(z,A_f)-d(z,y)|\leq C\eta$ by the assumptions, we get \begin{equation}\label{57a0} |d_S(\Psi(z),\Psi(y))-d(z,y)|\leq C\eta. \end{equation} Take a $\Pi$-triple $(\tilde{z},\tilde{x},\tilde{y})$ for $(z,x,y,f)$. Then, we have \begin{align*} d(\tilde{z},\gamma_{\tilde{y},\tilde{x}}(s)) \leq d(z,y)+d(y,x)+C\eta_0 \leq \pi-\frac{1}{C}\eta^{1/2}+C\eta\leq \pi \end{align*} for all $s\in[0,d(\tilde{x},\tilde{y})]$, and so $$ \left| d(z,x)^2-d_S(\Psi(z),\Psi(x))^2- d(z,y)^2+d_S(\Psi(z),\Psi(y))^2 \right|\leq C\eta^{1/26} $$ by Lemmas \ref{p54c00} and \ref{ptrp}. Thus, we get $d(x,z)\leq C\eta^{1/52}$ by (\ref{57a0}). \end{proof} Let us suppose that $x,y,z\in M$ satisfy the assumptions of the lemma. Take $u\in S^{n-p}$ with $f=\sum_{i=1}^{n-p+1}u_i f_i$. By the assumptions and Lemma \ref{p54c0}, we have \begin{equation}\label{57a1} |d(x,A_f)-d(z,A_f)| \leq |d_S(\Psi(x),u)-d_S(\Psi(z),u)|+C\delta^{1/2000n^2} \leq C\eta. \end{equation} Thus, if either $d(x,A_f)\leq \eta^{1/26}$ or $d(z,A_f)\leq \eta^{1/26}$ holds, then the lemma is trivial. In the following, we assume $d(x,A_f)\geq \eta^{1/26}$ and $d(z,A_f)\geq \eta^{1/26}$. Take a $\Pi$-triple $(\tilde{z},\tilde{x},\tilde{y})$ for $(z,x,y,f)$. By Lemma \ref{p54g} (iii), we can take $v_1,v_2\in S^{n-p}$ such that $u\cdot v_i=0$ ($i=1,2$), \begin{equation}\label{57a} d_S(\Psi(\gamma_{\tilde{y},\tilde{x}}(s)),\gamma_{v_1}(s))\leq C\eta^{3/13} \end{equation} for all $s\in [0,d(\tilde{y},\tilde{x})]$ and \begin{equation}\label{57b} d_S(\Psi(\gamma_{\tilde{y},\tilde{z}}(s)),\gamma_{v_2}(s))\leq C\eta^{3/13} \end{equation} for all $s\in [0,d(\tilde{y},\tilde{z})]$, where $\gamma_{v_i}(s):=(\cos s) u+(\sin s) v_i \in S^{n-p}$ ($i=1,2$).
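For the next estimates we record an elementary identity for these great circles (a direct computation in $\mathbb{R}^{n-p+1}$, stated here for convenience): since $u\cdot v_1=u\cdot v_2=0$ and $|v_1|=|v_2|=1$, we have
% added elementary identity for the great circles \gamma_{v_1}, \gamma_{v_2}
\begin{equation*} \gamma_{v_1}(s)-\gamma_{v_2}(s)=(\sin s)(v_1-v_2), \qquad\text{and so}\qquad |v_1-v_2|\,|\sin s|=d_{\mathbb{R}^{n-p+1}}(\gamma_{v_1}(s),\gamma_{v_2}(s)) \end{equation*}
for all $s\in\mathbb{R}$; combined with $d_{\mathbb{R}^{n-p+1}}\leq d_S\leq 3 d_{\mathbb{R}^{n-p+1}}$, this gives $|v_1-v_2|\sin s\leq d_S(\gamma_{v_1}(s),\gamma_{v_2}(s))\leq 3|v_1-v_2|\sin s$ for $s\in[0,\pi]$.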
By the assumptions and (\ref{57a1}), we get \begin{equation}\label{57b1} |d(\tilde{y},\tilde{x})-d(\tilde{y},\tilde{z})|\leq C\eta, \end{equation} and so \begin{align*} \sin d(\tilde{y},\tilde{x}) |v_1- v_2| \leq &C d_S(\gamma_{v_1}(d(\tilde{y},\tilde{x})),\gamma_{v_2}(d(\tilde{y},\tilde{x})))\\ \leq &Cd_S(\Psi(\tilde{x}),\Psi(\tilde{z}))+C\eta^{3/13} \leq C\eta^{3/13} \end{align*} by (\ref{57a}) and (\ref{57b}). By $\eta^{1/26}\leq d(x,A_f)\leq \pi-\frac{1}{C}\eta^{1/78}$, we have $\sin d(\tilde{y},\tilde{x})\geq \frac{1}{C}\eta^{1/26}$. Thus, we get $ |v_1-v_2|\leq C\eta^{1/26}. $ This gives \begin{equation}\label{57c} d_S(\gamma_{v_1}(s),\gamma_{v_2}(s))\leq C\eta^{1/26} \end{equation} for all $s\in \mathbb{R}$. Put $ a:=\gamma_{\tilde{y},\tilde{x}}\left(d(\tilde{y},\tilde{x})/2\right)$ and $ b:=\gamma_{\tilde{y},\tilde{z}}\left(d(\tilde{y},\tilde{z})/2\right).$ By (\ref{57a}), (\ref{57b}), (\ref{57b1}) and (\ref{57c}), we have $ d_S(\Psi(a),\Psi(b))\leq C\eta^{1/26}. $ Moreover, the other assumptions of Claim \ref{p54k} hold for the triple $(a,y,b)$ by Lemma \ref{p54g} (i), and so $ d(a,b)\leq C\eta^{1/52}. $ Therefore, we have $$ d(\tilde{z}, \gamma_{\tilde{y},\tilde{x}}(s))\leq d(\tilde{z},b)+d(a,b)+d(\gamma_{\tilde{y},\tilde{x}}(s),a)\leq \frac{1}{2}d(\tilde{x},\tilde{y})+\frac{1}{2}d(\tilde{z},\tilde{y})+C\eta^{1/52}\leq \pi $$ for all $s\in[0,d(\tilde{y},\tilde{x})]$, and so $d(\tilde{x},\tilde{z})\leq C\eta^{1/52}$, arguing as in the proof of Claim \ref{p54k}. Thus, we get the lemma. \end{proof} Let us show the almost Pythagorean theorem for the special case. Recall that we defined $\eta_1:=\eta_0^{1/26}$. \begin{Lem}\label{p54l} Take \begin{itemize} \item $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$, \item $x,y,z,w\in M$, \item $\eta>0$ with $\eta_1\leq \eta\leq L^{1/3n}$. \end{itemize} Suppose \begin{itemize} \item $d(x,z)\leq C\eta$, \item $d(x,A_f)\leq \pi- \frac{1}{C}\eta^{1/2}$ and $d(z,A_f)\leq \pi- \frac{1}{C}\eta^{1/2}$, \item $d(y,A_f)\leq C\eta_0$ and $d(w,A_f)\leq C\eta_0$, \item $|d(x,A_f)-d(x,y)|\leq C\eta_0$ and $|d(z,A_f)-d(z,w)|\leq C\eta_0$. \end{itemize} Then, we have $$ |d(x,z)^2-d_S(\Psi(x),\Psi(z))^2-d(y,w)^2|\leq C\eta_1. $$ \end{Lem} \begin{proof} By Lemma \ref{p54c0}, we have \begin{equation}\label{57d0} d_S(\Psi(y),\Psi(w)) \leq d(y,A_f)+d(w,A_f)+C\delta^{1/2000n^2} \leq C\eta_0. \end{equation} Put $a_0:=x$ and $b_0:=z$. In the following, we define $a_{i},b_{i}\in M$ ($i=1,2,3$) so that \begin{itemize} \item[(i)] $d(a_{i},b_{i})\leq C\eta^{1/2}$, \item[(ii)] $|d(a_{i},A_f)-d(a_{i},y)|\leq C\eta_0$ and $|d(b_{i},A_f)-d(b_{i},w)|\leq C\eta_0$, \item[(iii)] $d(a_{i},A_f)\leq \frac{3-i}{3}\pi+C\eta_0$ and $d(b_{i},A_f)\leq \frac{3-i}{3}\pi+C\eta_0$, \item[(iv)] $|d(a_{i+1},b_{i+1})^2-d_S(\Psi(a_{i+1}),\Psi(b_{i+1}))^2-d(a_{i},b_{i})^2+d_S(\Psi(a_{i}),\Psi(b_{i}))^2|\leq C\eta_0^{1/26}$ ($i=0,1,2$), \item[(v)] $d(y,a_3)\leq C\eta_0$ and $d(w,b_3)\leq C\eta_0$. \end{itemize} If we succeed in defining such $a_i$ and $b_i$, we have $$ |d(x,z)^2-d_S(\Psi(x),\Psi(z))^2-d(y,w)^2+d_S(\Psi(y),\Psi(w))^2|\leq C\eta_0^{1/26}=C\eta_1 $$ by (iv) and (v), and so we get the lemma by (\ref{57d0}). Take arbitrary $i\in\{0,1,2\}$, and suppose that we have chosen $a_i,b_i\in M$ such that (i), (ii) and (iii) hold (for $i=0$, these properties follow from the assumptions). Let us define $a_{i+1},b_{i+1}\in M$ that satisfy our properties. Take a $\Pi$-triple $(\tilde{b}_i,\tilde{a}_i, \tilde{y}_i)$ for $(b_i,a_i,y,f)$.
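Before defining $a_{i+1}$, we record the bookkeeping behind the parameter $\frac{2-i}{3-i}$ (a sketch; all distances below are understood up to the additive errors $C\eta_0$ appearing above): since $d(a_{i+1},\tilde{y}_i)=\frac{2-i}{3-i}d(\tilde{y}_i,\tilde{a}_i)$, the distance to $y$ contracts as
% added bookkeeping sketch for the interpolation parameter
\begin{equation*} d(a_1,y)\approx\frac{2}{3}\,d(x,y),\qquad d(a_2,y)\approx\frac{1}{3}\,d(x,y),\qquad d(a_3,y)\approx 0, \end{equation*}
so each step moves a third of the original distance; this is what keeps the comparison points within distance $\pi$ of each other (as required in Lemma \ref{ptrp}) and yields (iii) and (v).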
Define $$ a_{i+1}:=\gamma_{\tilde{y}_i,\tilde{a}_i}\left(\frac{2-i}{3-i}d(\tilde{y}_i,\tilde{a}_i)\right). $$ Since $$d(\tilde{b}_i, \gamma_{\tilde{y}_i,\tilde{a}_i}(s))\leq d(\tilde{a}_i,\tilde{b}_i) +d(\tilde{a}_i,\gamma_{\tilde{y}_i,\tilde{a}_i}(s))\leq \frac{\pi}{3}+C\eta^{1/2}$$ for all $s\in\left[\frac{2-i}{3-i}d(\tilde{y}_i,\tilde{a}_i),d(\tilde{y}_i,\tilde{a}_i)\right]$ by the assumptions, we get \begin{equation}\label{57d} |d(a_{i+1},b_{i})^2-d_S(\Psi(a_{i+1}),\Psi(b_{i}))^2-d(a_{i},b_{i})^2+d_S(\Psi(a_{i}),\Psi(b_{i}))^2|\leq C\eta_0^{1/26} \end{equation} by Lemmas \ref{p54c00} and \ref{ptrp}. Take a $\Pi$-triple $(\overline{a}_{i+1},\overline{b}_i,\overline{w}_i)$ for $(a_{i+1},b_i,w,f)$. Define $$ b_{i+1}:=\gamma_{\overline{w}_i,\overline{b}_i}\left(\frac{2-i}{3-i}d(\overline{w}_i,\overline{b}_i)\right). $$ Since $$ d(\overline{a}_{i+1},\gamma_{\overline{w}_i,\overline{b}_i}(s))\leq d(\overline{a}_{i+1},a_{i+1})+d(a_{i+1},\overline{b}_i)+d(\overline{b}_i,\gamma_{\overline{w}_i,\overline{b}_i}(s))\leq \frac{2}{3}\pi +C\eta^{1/2} $$ for all $s\in\left[\frac{2-i}{3-i}d(\overline{w}_i,\overline{b}_i),d(\overline{w}_i,\overline{b}_i)\right]$ by the assumptions, we get \begin{equation}\label{57e} |d(a_{i+1},b_{i+1})^2-d_S(\Psi(a_{i+1}),\Psi(b_{i+1}))^2-d(a_{i+1},b_{i})^2+d_S(\Psi(a_{i+1}),\Psi(b_{i}))^2|\leq C\eta_0^{1/26} \end{equation} by Lemmas \ref{p54c00} and \ref{ptrp}. By (\ref{57d}) and (\ref{57e}), we get (iv). By the assumptions and Lemma \ref{p54g}, we get (ii) for $a_{i+1}$ and $b_{i+1}$. By the assumptions, we have \begin{align*} d(a_{i+1},A_f) \leq &d(a_{i+1},\tilde{y}_i)+d(y,A_f)+C\eta_0\\ =&\frac{2-i}{3-i}d(\tilde{a}_{i},\tilde{y}_i)+C\eta_0 \leq \frac{2-i}{3}\pi+C\eta_0. \end{align*} Similarly, we have $d(b_{i+1},A_f)\leq \frac{2-i}{3}\pi+C\eta_0$. Thus, we get (iii) for $a_{i+1}$ and $b_{i+1}$. By definition, we have $ a_3=\tilde{y}_2$ and $b_3=\overline{w}_2. $ Thus, we get (v). In the following, we prove (i) for $a_{i+1}$ and $b_{i+1}$. If $d(a_i,y)\leq \eta_0^{1/26}$, then we have \begin{align*} d(b_i,w)\leq d(b_i,A_f)+C\eta_0 \leq d(a_i,A_f)+C\eta^{1/2} \leq C\eta^{1/2}, \end{align*} and so $ d(y,w)\leq C\eta^{1/2}$, $d(a_{i+1},y)\leq C\eta^{1/2}$ and $d(b_{i+1},w)\leq C\eta^{1/2}. $ Then, we have $d(a_{i+1},b_{i+1})\leq C\eta^{1/2}$. Similarly, if $d(b_i,w)\leq \eta_0^{1/26}$, then $d(a_{i+1},b_{i+1})\leq C\eta^{1/2}$. Thus, in the following, we assume that $d(a_i,y)\geq \eta_0^{1/26}$ and $d(b_i,w)\geq \eta_0^{1/26}$. By Lemma \ref{p54g}, we can take $u,v_1,v_2\in S^{n-p}$ such that $f=\sum_{j=1}^{n-p+1}u_j f_j$, $ u\cdot v_k=0$ ($k=1,2$), \begin{equation}\label{57f} d_S(\Psi(\gamma_{\tilde{y}_i,\tilde{a}_i}(s)),\gamma_{v_1}(s))\leq C\eta_0^{3/13} \end{equation} for all $s\in [0,d(\tilde{a}_i,\tilde{y}_i)]$ and \begin{equation}\label{57g} d_S(\Psi(\gamma_{\overline{w}_i,\overline{b}_i}(s)),\gamma_{v_2}(s))\leq C\eta_0^{3/13} \end{equation} for all $s\in [0,d(\overline{b}_i,\overline{w}_i)]$, where $\gamma_{v_k}(s):=(\cos s) u+(\sin s) v_k\in S^{n-p}$ ($k=1,2$).
Since $$|d(\tilde{a}_i,\tilde{y}_i)-d(\overline{b}_i,\overline{w}_i)|\leq |d(a_i,A_f)-d(b_i,A_f)|+C\eta_0\leq d(a_i,b_i)+C\eta_0,$$ we have \begin{equation}\label{57h} \left|d_S(\Psi(\tilde{a}_i),\Psi(\overline{b}_i)) -d_S\left(\gamma_{v_1}(l_i),\gamma_{v_2} (l_i)\right) \right|\leq d(a_i,b_i)+C\eta_0^{3/13} \end{equation} and \begin{equation}\label{57i} \left|d_S(\Psi(a_{i+1}),\Psi(b_{i+1})) -d_S\left(\gamma_{v_1}\left(\frac{2-i}{3-i}l_i\right),\gamma_{v_2} \left(\frac{2-i}{3-i}l_i\right)\right) \right|\leq d(a_i,b_i)+C\eta_0^{3/13} \end{equation} by (\ref{57f}) and (\ref{57g}), where we put $l_i:=d(\tilde{a}_i,\tilde{y}_i)$. By (\ref{57h}) and Lemma \ref{p54c00}, we get \begin{equation}\label{57i1} |v_1-v_2|\sin l_i \leq C d_S\left(\gamma_{v_1}(l_i),\gamma_{v_2} (l_i)\right) \leq Cd(a_i,b_i)+C\eta_0^{3/13}. \end{equation} We first suppose that $d(a_i,y)\leq \pi/6$. Since $l_i\leq \pi/2$, we have $$\sin \left(\frac{2-i}{3-i}l_i\right) \leq \sin l_i,$$ and so \begin{align*} d_S(\Psi(a_{i+1}),\Psi(b_{i+1})) \leq& d_S\left(\gamma_{v_1}\left(\frac{2-i}{3-i}l_i\right),\gamma_{v_2} \left(\frac{2-i}{3-i}l_i\right)\right)+C\eta^{1/2}\\ \leq& C|v_1-v_2|\sin \left(\frac{2-i}{3-i}l_i\right)+C\eta^{1/2}\\ \leq& C|v_1-v_2|\sin l_i+C\eta^{1/2}\\ \leq& C d_S(\Psi(\tilde{a}_i),\Psi(\overline{b}_i))+C\eta^{1/2} \leq C\eta^{1/2} \end{align*} by (\ref{57h}), (\ref{57i}) and $d(a_i,b_i)\leq C\eta^{1/2}$. Thus, we get $d(a_{i+1},b_{i+1})\leq C\eta^{1/2}$ by (iv). We next suppose that $\pi/6\leq d(a_i,y)\leq 5\pi/6$. By (\ref{57i1}) and $d(a_i,b_i)\leq C\eta^{1/2}$, we have $|v_1-v_2|\leq C\eta^{1/2}$. Thus, we get $ d_S(\Psi(a_{i+1}),\Psi(b_{i+1})) \leq C\eta^{1/2} $ by (\ref{57i}). Thus, we get $d(a_{i+1},b_{i+1})\leq C\eta^{1/2}$ by (iv). If $i\geq 1$, we have $d(a_i,y)\leq 5\pi/6$, and so we get $d(a_{i+1},b_{i+1})\leq C\eta^{1/2}$ by the above two cases. Finally, we suppose that $i=0$ and $d(x,y)\geq 5\pi/6$. By (\ref{57i1}) and $d(a_0,b_0)\leq C\eta$, we have $|v_1-v_2|\sin l_0\leq C\eta$. By the definition of $l_0$, we have $|l_0-d(x,y)|\leq C\eta_0.$ Thus, we have $\sin l_0\geq \frac{1}{C}(\pi- l_0)\geq \frac{1}{C}\eta^{1/2}$, and so we get $|v_1-v_2|\leq C\eta^{1/2}$. This gives $ d_S(\Psi(a_{i+1}),\Psi(b_{i+1})) \leq C\eta^{1/2} $ by (\ref{57i}). Thus, $d(a_{i+1},b_{i+1})\leq C\eta^{1/2}$ by (iv). Therefore, we have (i) in all cases, and we get the lemma. \end{proof} Let us show that the map $\Phi_f\colon M\to S^{n-p}\times A_f,\,x\mapsto (\Psi(x), a_f(x))$ is almost surjective. \begin{Prop}\label{p54m} Take $f\in \Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$. For any $(v,a)\in S^{n-p}\times A_f$, there exists $x\in M$ such that $d(\Phi_f(x),(v,a))\leq C\eta_1^{1/2}$ holds. \end{Prop} \begin{proof} Take arbitrary $(v,a)\in S^{n-p}\times A_f$. Take $u\in S^{n-p}$ with $f=\sum_{i=1}^{n-p+1} u_i f_i$. Since there exists $\tilde{v}\in S^{n-p}$ such that $d_S(u,\tilde{v})\leq \pi-\eta_1^{1/2}$ and $d_S(v,\tilde{v})\leq \eta_1^{1/2}$, it is enough to prove the proposition assuming $d_S(u,v)\leq \pi-\eta_1^{1/2}$. Put $F_v:=\sum_{i=1}^{n-p+1}v_i f_i$. Then, $|F_v(p_{F_v})-1|\leq C\delta^{1/800n}$ and $A_{F_v}=\{x\in M:|F_v(x)-1|\leq \delta^{1/900n}\}$ by Proposition \ref{p53a}. In the following, we show that $a_v:=a_{F_v}(a)\in A_{F_v}$ has the desired property. By Lemma \ref{p54c0}, we get \begin{align} \notag d_S(\Psi(a),u)\leq &C\delta^{1/2000n^2},\\ \label{57n} d_S(\Psi(a_v),v)\leq &C\delta^{1/2000n^2}.
\end{align} Thus, by Lemma \ref{p54c0}, we get \begin{align*} |d(a,a_v)-d(a_f(a_v),a_v)|=&|d(a,A_{F_v})-d(a_v,A_f)|\\ \leq& |d_S(\Psi(a),v)-d_S(\Psi(a_v),u)|+C\delta^{1/2000n^2}\\ \leq& C\delta^{1/2000n^2}\leq \eta_0 \end{align*} and $$ d(a_v,A_f)\leq d_S(\Psi(a_v),u)+C\delta^{1/2000n^2} \leq d_S(u,v)+C\delta^{1/2000n^2}\leq \pi-\frac{1}{2}\eta_1^{1/2}. $$ Since we have $d(a_v,A_f)=d(a_v,a_f(a_v))$, we get \begin{align*} |d(a_v,A_f)-d(a_v,a)|\leq |d(a_v,A_f)-d(a_v,a_f(a_v))|+\eta_0=\eta_0, \end{align*} and so we get \begin{equation}\label{57o} d(a,a_f(a_v))\leq C\eta_1^{1/2} \end{equation} by Lemma \ref{p54l} putting $x=z=a_v$, $y=a$ and $w=a_f(a_v)$. By (\ref{57n}) and (\ref{57o}), putting $x=a_v$, we get the proposition. \end{proof} Now, we are in a position to show $|\dot{\gamma}_{\tilde{y}_1,y_2}^E|d(\tilde{y}_1,y_2)\leq \pi+L$ under the assumption of Lemma \ref{p54d}. Note that we defined $\eta_2=\eta_1^{1/78}$ and $L=\eta_2^{1/150}$. \begin{Lem}\label{p54n} Take $y_1\in M$, $\tilde{y}_1\in D_{f_{y_1}}(p_{y_1})\cap R_{f_{y_1}}\cap Q_{f_{y_1}}$ with $d(y_1,\tilde{y}_1)\leq C\delta^{1/100n}$ and $y_2\in D_{f_{y_1}}(\tilde{y}_1)$. Let $\{E^1,\ldots,E^n\}$ be a parallel orthonormal basis of $T^\ast M$ along $\gamma_{\tilde{y}_1,y_2}$ in Lemma \ref{p5e} for $f_{y_1}$. Then, $|\dot{\gamma}_{\tilde{y}_1,y_2}^E|d(\tilde{y}_1,y_2)\leq \pi+ L$ and $$ ||\dot{\gamma}_{\tilde{y}_1,y_2}^E|d(\tilde{y}_1,y_2)-d_S(\Psi(y_1),\Psi(y_2))|\leq CL. $$ \end{Lem} \begin{proof} We immediately get the second assertion by the first assertion and Lemma \ref{p54d}. Let us show the first assertion by contradiction. Suppose that $|\dot{\gamma}_{\tilde{y}_1,y_2}^E|d(\tilde{y}_1,y_2)>\pi+ L.$ Put \begin{align*} f:=-f_{y_1},\,\gamma:=\gamma_{\tilde{y}_1,y_2},\,s_0:=\frac{1}{|\dot{\gamma}^E|}\eta_2^{1/104}\text{ and }s_1:=\frac{1}{|\dot{\gamma}^E|}(\pi+L). \end{align*} Take $k\in \mathbb{N}$ such that $(s_1-s_0)\eta_2^{-1}<k\leq (s_1-s_0)\eta_2^{-1}+1,$ and put $ t_j:= s_0+ (s_1-s_0)j/k $ for each $j\in\{0,\ldots,k\}$. Note that we have $t_0=s_0$, $t_k=s_1$ and \begin{align}\label{57p0} \frac{1}{C}\eta_2^{-1}\leq k\leq C\eta_2^{-1}. \end{align} We have \begin{align*} \cos d_S(\Psi(y_1),\Psi(\gamma(s))) \leq \cos (|\dot{\gamma}^E| s)+C\delta^{1/2000n^2} \leq 1-\frac{1}{C}\eta_2^{1/52} \end{align*} for all $s\in[s_0,s_1]$ by Lemma \ref{p54d}. Since $f(\gamma(s))=-|\widetilde{\Psi}|(\gamma(s))\cos d_S(\Psi(y_1),\Psi(\gamma(s)))$ by the definitions of $f_{y_1}$ and $f$, we get $ f(\gamma(s))\geq -1+\frac{1}{C}\eta_2^{1/52} $ for all $s\in[s_0,s_1]$ by Lemma \ref{p54a}. This gives \begin{equation}\label{57p} d(\gamma(s),A_f)\leq \pi-\frac{1}{C}\eta_2^{1/104} \end{equation} for all $s\in[s_0,s_1]$ by Proposition \ref{p53a}. By the definition of $t_j$ and (\ref{57p}), we have \begin{align} \label{57p11} d(\gamma(t_j),\gamma(t_{j+1}))\leq& \eta_2,\\ \notag d(\gamma(t_{j+\sigma}),A_f)\leq & \pi-\frac{1}{C}\eta_2^{1/104}\leq \pi-\eta_2^{1/2} \end{align} for all $j\in\{0,\ldots,k-1\}$ and $\sigma\in\{0,1\}$, and so we get \begin{equation}\label{57q} |d(\gamma(t_j),\gamma(t_{j+1}))^2-d_S(\Psi(\gamma(t_j)),\Psi(\gamma(t_{j+1})))^2-d(a_f(\gamma(t_j)),a_f(\gamma(t_{j+1})))^2|\leq C\eta_1 \end{equation} by Lemma \ref{p54l}. In particular, we get \begin{equation}\label{57r} d(a_f(\gamma(t_j)),a_f(\gamma(t_{j+1})))\leq C\eta_2 \end{equation} by (\ref{57p11}). Take $j_0\in\{1,\ldots, k-1\}$ such that $ |\dot{\gamma}^E|t_{j_0}< \pi \leq |\dot{\gamma}^E|t_{j_0+1}.
$ Since $$ ||\dot{\gamma}^E|s-d_S(\Psi(y_1),\Psi(\gamma(s)))|\leq C\delta^{1/4000n^2} $$ for all $s\in\left[0,\frac{1}{|\dot{\gamma}^E|}\pi\right]$ by Lemma \ref{p54d}, we get \begin{equation}\label{57s} \begin{split} d_S(\Psi(\gamma(t_j)),\Psi(\gamma(t_{j+1}))) \geq &d_S(\Psi(y_1),\Psi(\gamma(t_{j+1})))- d_S(\Psi(y_1),\Psi(\gamma(t_{j})))\\ \geq &|\dot{\gamma}^E|(t_{j+1}-t_j)-C\delta^{1/4000n^2} \end{split} \end{equation} for all $j\in \{0,\ldots,j_0-1\}$. Since $$ |2\pi-|\dot{\gamma}^E|s-d_S(\Psi(y_1),\Psi(\gamma(s)))|\leq C\delta^{1/4000n^2} $$ for all $s\in\left[\frac{1}{|\dot{\gamma}^E|}\pi,s_1\right]$ by Lemma \ref{p54d}, we get \begin{equation}\label{57t} d_S(\Psi(\gamma(t_j)),\Psi(\gamma(t_{j+1}))) \geq |\dot{\gamma}^E|(t_{j+1}-t_j)-C\delta^{1/4000n^2} \end{equation} for all $j\in \{j_0+1,\ldots,k-1\}$. By (\ref{57q}), (\ref{57s}) and (\ref{57t}), we get \begin{equation}\label{57u} d(a_f(\gamma(t_j)),a_f(\gamma(t_{j+1})))^2 \leq d(\gamma(t_j),\gamma(t_{j+1}))^2-|\dot{\gamma}^E|^2(t_{j+1}-t_j)^2+C\eta_1 \end{equation} for all $j\in\{0,\ldots,k-1\}\setminus \{j_0\}$. Since we have \begin{align*} d_S(\Psi(\gamma(s_l)),\Psi(p_f)) \leq d(\gamma(s_l),A_f)+C\delta^{1/2000n^2} \leq \pi-\frac{1}{C}\eta_2^{1/104} \end{align*} for each $l=0,1$ by Lemma \ref{p54c0}, Corollary \ref{p54c01} and (\ref{57p}), we can take a unit-speed curve $\beta\colon[0,K]\to S^{n-p}$ ($K$ is some constant) such that \begin{align*} \beta(0)=&\Psi(\gamma(s_0)),\\ \beta(K)=&\Psi(\gamma(s_1)),\\ |d_S(\Psi(\gamma(s_0)),\Psi(\gamma(s_1)))-K|\leq &C\eta_2^{1/104},\\ d_S(\beta(s),\Psi(p_f))\leq &\pi-\frac{1}{C}\eta_2^{1/104} \end{align*} for all $s\in[0,K]$. Note that we can find such $\beta$ by taking an almost shortest path in $\left\{w\in S^{n-p}: d_S(w,\Psi(p_f))\leq \pi-\frac{1}{C}\eta_2^{1/104}\right\}.$ By Proposition \ref{p54m}, there exists $x_j\in M$ such that \begin{equation}\label{57v} d\left(\Phi_f(x_j),\left(\beta\left(\frac{j}{k}K\right),a_f(\gamma(t_j))\right)\right)\leq C\eta_1^{1/2} \end{equation} for each $j\in\{0,\ldots,k\}$. Moreover, we can take $x_0:=\gamma(s_0)$ and $x_k:=\gamma(s_1)$. By (\ref{57p0}), (\ref{57r}), (\ref{57v}), Lemma \ref{p54c0} and Corollary \ref{p54c01}, we have \begin{align} \notag d(a_f(x_j),a_f(x_{j+1}))\leq &C\eta_2,\\ \label{57v1}d_S(\Psi(x_j),\Psi(x_{j+1}))\leq &\frac{1}{k}K+C\eta_1^{1/2}\leq C\eta_2,\\ \label{57v2}d(x_j,A_f)\leq &d_S(\Psi(x_j),\Psi(p_f))+C\delta^{1/2000n^2}\\ \notag \leq &d_S\left(\beta\left(\frac{j}{k}K\right),\Psi(p_f)\right)+C\eta_1^{1/2} \leq\pi-\frac{1}{C}\eta_2^{1/104} \end{align} for all $j$, and so \begin{equation}\label{57v3} d(x_j,x_{j+1})\leq C\eta_2^{1/52} \end{equation} by Lemma \ref{p54j} putting $x=x_j, y=a_f(x_j), z=x_{j+1}$ and $\eta=\eta_2$. By (\ref{57v2}), (\ref{57v3}) and Lemma \ref{p54l} putting $x=x_j, y=a_f(x_j), z=x_{j+1}, w=a_f(x_{j+1})$ and $\eta=\eta_2^{1/52}$, we get \begin{equation}\label{57w} |d(x_j,x_{j+1})^2-d_S(\Psi(x_j),\Psi(x_{j+1}))^2-d(a_f(x_j),a_f(x_{j+1}))^2|\leq C\eta_1 \end{equation} for all $j\in\{0,\ldots,k-1\}$. By (\ref{57u}), (\ref{57v1}) and (\ref{57w}), we have \begin{equation}\label{57x} \begin{split} d(x_j,x_{j+1})^2 \leq &\frac{1}{k^2}K^2+d(a_f(x_j),a_f(x_{j+1}))^2+C\eta_1^{1/2}\\ \leq &\frac{1}{k^2}K^2+d(\gamma(t_j),\gamma(t_{j+1}))^2-|\dot{\gamma}^E|^2(t_{j+1}-t_j)^2+C\eta_1^{1/2} \end{split} \end{equation} for all $j\in\{0,\ldots,k-1\}\setminus\{j_0\}$.
Since $K\leq \pi+C\eta_2^{1/104}$, we have \begin{equation}\label{57y} \frac{1}{k^2}K^2\leq \frac{\pi^2}{k^2}+\frac{C}{k^2}\eta_2^{1/104}. \end{equation} Since $$|\dot{\gamma}^E|(t_{j+1}-t_j)=\frac{|\dot{\gamma}^E|}{k}(s_1-s_0) =\frac{1}{k}(\pi+L-\eta_2^{1/104}) \geq\frac{1}{k}\left(\pi+\frac{1}{2}L\right),$$ we have \begin{equation}\label{57z} |\dot{\gamma}^E|^2(t_{j+1}-t_j)^2\geq \frac{\pi^2}{k^2}+\frac{1}{k^2}L \end{equation} for all $j\in\{0,\ldots,k-1\}$. By (\ref{57y}) and (\ref{57z}), we get $$ |\dot{\gamma}^E|^2(t_{j+1}-t_j)^2-\frac{1}{k^2}K^2\geq \frac{1}{k^2}L-\frac{C}{k^2}\eta_2^{1/104} \geq \frac{1}{2k^2}L $$ for all $j\in\{0,\ldots,k-1\}$. Thus, by (\ref{57x}), we have \begin{equation*} d(x_j,x_{j+1})^2 \leq d(\gamma(t_j),\gamma(t_{j+1}))^2-\frac{1}{2k^2}L+C\eta_1^{1/2} \leq d(\gamma(t_j),\gamma(t_{j+1}))^2-\frac{1}{4k^2}L \end{equation*} for all $j\in\{0,\ldots,k-1\}\setminus\{j_0\}$. Since $d(\gamma(t_j),\gamma(t_{j+1}))+d(x_j,x_{j+1})\leq 1$, we get \begin{equation}\label{58a} \frac{1}{4k^2}L\leq d(\gamma(t_j),\gamma(t_{j+1}))^2-d(x_j,x_{j+1})^2 \leq d(\gamma(t_j),\gamma(t_{j+1}))-d(x_j,x_{j+1}) \end{equation} for all $j\in\{0,\ldots,k-1\}\setminus\{j_0\}$. By (\ref{57p0}), (\ref{57v3}) and (\ref{58a}), we get \begin{equation*} \begin{split} d(x_0,x_k)\leq \sum_{j=0}^{k-1}d(x_j,x_{j+1}) \leq &\sum_{j=0}^{k-1}d(\gamma(t_j),\gamma(t_{j+1}))-\frac{k-1}{4k^2}L+d(x_{j_0},x_{j_0+1})\\ \leq& d(x_0,x_k)-\frac{1}{8k}L. \end{split} \end{equation*} This is a contradiction. Thus, we get the lemma. \end{proof} \begin{notation} For all $y_1,y_2\in M$, define \begin{align*} \overline{C}_f^{y_1}(y_2)=&\Big\{y_3\in M : \gamma_{y_2,y_3}(s)\in I_{y_1}\setminus\{y_1\} \text{ for almost all $s\in[0,d(y_2,y_3)]$, and}\\ &\qquad \qquad\qquad \qquad \int_{0}^{d(y_2,y_3)} |G_f^{y_1}|(\gamma_{y_2,y_3}(s))\,d s\leq L^{1/3}\Big\},\\ \overline{P}_f^{y_1}=&\{y_2\in M: \Vol(M\setminus \overline{C}_f^{y_1}(y_2))\leq L^{1/3}\Vol(M)\}. \end{align*} \end{notation} Let us complete the construction of the Gromov-Hausdorff approximation. \begin{Thm}\label{MT2} Take $f\in\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p+1}\}$ with $\|f\|_2^2=1/(n-p+1)$. Then, the map $\Phi_f\colon M\to S^{n-p}\times A_f$ is a $CL^{1/156n}$-Hausdorff approximation map. In particular, we have $d_{GH}(M, S^{n-p}\times A_f)\leq CL^{1/156n}$. \end{Thm} \begin{proof} Take arbitrary $y_1\in M$ and $\tilde{y}_1\in D_{f_{y_1}}(p_{y_1})\cap R_{f_{y_1}}\cap Q_{f_{y_1}}\cap Q_f$ with $d(y_1,\tilde{y}_1)\leq C\delta^{1/100n}$. By Lemmas \ref{p54c00}, \ref{p54n} and Corollary \ref{p54f0}, we have $ |G_f^{\tilde{y}_1}|(y_2)\leq CL $ for all $y_2\in D_f(\tilde{y}_1)\cap D_{f_{y_1}}(\tilde{y}_1)$. Since $\Vol(M\setminus (D_f(\tilde{y}_1)\cap D_{f_{y_1}}(\tilde{y}_1)))\leq C\delta^{1/100}\Vol(M)$ and $\|G_f^{\tilde{y}_1}\|_\infty\leq C$, we get $\|G_f^{\tilde{y}_1}\|_1\leq CL.$ Thus, by the segment inequality, we get $ \Vol(M\setminus \overline{P}^{\tilde{y}_1}_f)\leq CL^{1/3}. $ Take arbitrary $x,z\in M$. By the Bishop-Gromov inequality, there exist $\tilde{z}\in D_{f_{z}}(p_{z})\cap Q_{f_{z}} \cap R_{f_{z}}\cap Q_f$, $\tilde{x}\in D_f(p_f)\cap Q_f \cap R_f\cap\overline{P}_f^{\tilde{z}}$ and $\tilde{y}\in D_f(\tilde{x})\cap D_f(p_f)\cap Q_f\cap R_f\cap \overline{C}_f^{\tilde{z}}(\tilde{x})$ such that $d(z,\tilde{z})\leq C \delta^{1/100n}$, $d(x,\tilde{x})\leq CL^{1/3n}$ and $d(a_f(x),\tilde{y})\leq CL^{1/3n}$. Here, we used the estimate $\Vol(M\setminus \overline{P}^{\tilde{z}}_f)\leq CL^{1/3}$.
Then, we get $$ \left| d(\tilde{z},\tilde{x})^2-d_S(\Psi(\tilde{z}),\Psi(\tilde{x}))^2- d(\tilde{z},\tilde{y})^2+d_S(\Psi(\tilde{z}),\Psi(\tilde{y}))^2 \right|\leq CL^{1/78n} $$ by Lemma \ref{p54i}. Thus, we get \begin{equation}\label{59a} \left| d(z,x)^2-d_S(\Psi(z),\Psi(x))^2- d(z,a_f(x))^2+d_S(\Psi(z),\Psi(a_f(x)))^2 \right|\leq CL^{1/78n} \end{equation} by Lemma \ref{p54c00}. Similarly, we have \begin{equation}\label{59b} \begin{split} &\left| d(a_f(x),z)^2-d_S(\Psi(a_f(x)),\Psi(z))^2- d(a_f(x),a_f(z))^2+ d_S(\Psi(a_f(x)),\Psi(a_f(z)))^2\right|\\ \leq &CL^{1/78n}. \end{split} \end{equation} Since we have $d_S(\Psi(a_f(x)),\Psi(a_f(z)))\leq C\delta^{1/2000n^2}$ by Lemma \ref{p54c0}, we get \begin{equation*} \left| d(z,x)^2-d_S(\Psi(z),\Psi(x))^2- d(a_f(x),a_f(z))^2\right|\leq CL^{1/78n} \end{equation*} by (\ref{59a}) and (\ref{59b}). This gives \begin{equation*} \begin{split} &\left| d(z,x)-d(\Phi_f(z),\Phi_f(x))\right|\\ =&\left| d(z,x)-\left(d_S(\Psi(z),\Psi(x))^2+ d(a_f(x),a_f(z))^2\right)^{1/2}\right|\leq CL^{1/156n}. \end{split} \end{equation*} Combining this and Proposition \ref{p54m}, we get the theorem. \end{proof} By the above theorem, we get Main Theorem 2 except for the orientability, which is proved in subsection 4.7. \subsection{Further Inequalities} In this subsection, we assume that Assumption \ref{aspform} holds, and prepare two lemmas to prove the remaining parts of the main theorems. \begin{Lem}\label{pfua} For any $f\in \Span_{\mathbb{R}}\{f_1,\ldots,f_{k}\}$, we have $$ \left\|\sum_{i=1}^n e^i\otimes (\nabla_{e_i}d f+f e^i)\wedge \omega \right\|_2\leq C\delta^{1/8}\|f\|_2. $$ \end{Lem} \begin{proof} We have \begin{equation}\label{fua} \begin{split} &\left|\sum_{i=1}^n e^i\otimes (\nabla_{e_i}d f+f e^i)\wedge \omega\right|^2\\ =& |\nabla^2 f|^2|\omega|^2-\frac{1}{n-p}(\Delta f)^2|\omega|^2 +2\Delta f \left(\frac{1}{n-p}\Delta f-f\right)|\omega|^2\\ -&(n-p)\left(\left(\frac{\Delta f}{n-p}\right)^2-f^2\right)|\omega|^2 -\left|\sum_{i=1}^n e^i \otimes\iota(\nabla_{e_i}\nabla f)\omega\right|^2-2\sum_{i=1}^n f \langle\omega , e^i\wedge \iota(\nabla_{e_i}\nabla f)\omega\rangle. \end{split} \end{equation} By the assumption, we have \begin{align} \label{fub}\left\|\Delta f \left(\frac{1}{n-p}\Delta f-f\right)|\omega|^2\right\|_1\leq &C\delta^{1/2}\|f\|_2^2,\\ \label{fuc}\left\|\left(\left(\frac{\Delta f}{n-p}\right)^2-f^2\right)|\omega|^2\right\|_1\leq &C\delta^{1/2}\|f\|_2^2. \end{align} By Lemma \ref{p4d} (iv) and Lemma \ref{p5c} (ii), we have \begin{equation}\label{fud} \left\|\sum_{i=1}^n e^i \otimes\iota(\nabla_{e_i}\nabla f)\omega\right\|_2 \leq \|\nabla (\iota(\nabla f)\omega)\|_2+ C\delta^{1/2}\|f\|_2\leq C\delta^{1/4}\|f\|_2, \end{equation} and so \begin{equation}\label{fue} \left\|\sum_{i=1}^n f \langle\omega , e^i\wedge \iota(\nabla_{e_i}\nabla f)\omega\rangle\right\|_1\leq C\|f\|_2\left\|\sum_{i=1}^n e^i \otimes\iota(\nabla_{e_i}\nabla f)\omega\right\|_2 \leq C\delta^{1/4}\|f\|_2^2. \end{equation} By Lemma \ref{p5c}, (\ref{fua}), (\ref{fub}), (\ref{fuc}), (\ref{fud}) and (\ref{fue}), we get the lemma. \end{proof} \begin{Lem}\label{pfub} Define $G=G(f_1,\ldots,f_k)$ by \begin{equation*} \begin{split} G:=\Big\{x\in M: & |f_i^2+|\nabla f_i|^2-1|(x)\leq\delta^{1/1600n}\text{ for all $i=1,\ldots,k$, and}\\ &\left|\frac{1}{2}(f_i\pm f_j)^2+\frac{1}{2}|\nabla f_i\pm \nabla f_j|^2-1\right|(x)\leq \delta^{1/1600n}\text{ for all $i\neq j$} \Big\}. \end{split} \end{equation*} Then, we have the following properties.
\begin{itemize} \item[(i)] We have $\Vol(M\setminus G)\leq C\delta^{1/1600n}\Vol(M)$. \item[(ii)] For all $x\in G$ and $i,j$ with $i\neq j$, we have $\left|f_i f_j+\langle\nabla f_i,\nabla f_j\rangle\right|(x)\leq\delta^{1/1600n}$. \end{itemize} \end{Lem} \begin{proof} By Proposition \ref{p53a} (iii), we have \begin{equation*} \begin{split} \|f_i^2+|\nabla f_i|^2-1\|_1\leq&C\delta^{1/800n},\\ \left\|\frac{1}{2}(f_i\pm f_j)^2+\frac{1}{2}|\nabla f_i\pm \nabla f_j|^2-1\right\|_1\leq &C\delta^{1/800n} \end{split} \end{equation*} for all $i\neq j$. Therefore, we get \begin{align*} &\Vol\left(\left\{x\in M: \left|f_i^2+|\nabla f_i|^2-1\right|(x)>\delta^{1/1600n}\right\}\right)\\ \leq& \delta^{-1/1600n}\int_M \left|f_i^2+|\nabla f_i|^2-1\right|\,d\mu_g\leq C\delta^{1/1600n}\Vol(M) \end{align*} for all $i$. Similarly, we have \begin{align*} \Vol\left(\left\{x\in M: \left|\frac{1}{2}(f_i\pm f_j)^2+\frac{1}{2}|\nabla f_i\pm\nabla f_j|^2-1\right|(x)>\delta^{1/1600n}\right\}\right)\leq C\delta^{1/1600n}\Vol(M) \end{align*} for all $i\neq j$. Thus, we get (i). For all $x\in G$ and $i,j$ with $i\neq j$, we have \begin{align*} &\left|f_i f_j+\langle\nabla f_i,\nabla f_j\rangle\right|(x)\\ =&\frac{1}{2}\left| \frac{1}{2}(f_i+f_j)^2+\frac{1}{2}|\nabla f_i+\nabla f_j|^2 -\frac{1}{2}(f_i-f_j)^2-\frac{1}{2}|\nabla f_i-\nabla f_j|^2\right|(x) \leq\delta^{1/1600n}. \end{align*} Thus, we get (ii). \end{proof} \subsection{Orientability} The goal of this subsection is to show the orientability of the manifold under the assumption of Main Theorem 2. \begin{Thm}\label{pora} If Assumption \ref{asu1} for $k=n-p+1$ and Assumption \ref{aspform} hold, then $M$ is orientable. \end{Thm} \begin{proof} To prove the theorem, we use the following claim: \begin{Clm}\label{porb} Define $$ \lambda_1(\Delta_{C,n}):=\inf \left\{\frac{\|\nabla \eta\|_2^2}{\|\eta\|_2^2}: \eta\in \Gamma(\bigwedge^n T^\ast M)\text{ with } \eta\neq 0\right\}. $$ If $ \lambda_1(\Delta_{C,n})< n(n-p-1)/(n-1) $ holds, then $M$ is orientable. \end{Clm} \begin{proof}[Proof of Claim \ref{porb}] Suppose that $M$ is not orientable. Take the two-sheeted oriented Riemannian covering $\pi\colon (\widetilde{M},\tilde{g})\to (M,g)$. Since we have $\Ric_{\tilde{g}}\geq (n-p-1)\tilde{g}$, we get $$ \lambda_1(\Delta_{C,n},g)\geq \lambda_2(\Delta_{C,n},\tilde{g})=\lambda_1(\tilde{g})\geq \frac{n}{n-1}(n-p-1) $$ by the Lichnerowicz estimate (note that $\lambda_1(\Delta_{C,n},\tilde{g})=\lambda_0(\tilde{g})=0$). This gives the claim. \end{proof} Put $$ V:=\sum_{i=1}^{n-p+1} (-1)^{i-1} f_i d f_1\wedge\cdots \wedge \widehat{d f_i}\wedge\cdots \wedge d f_{n-p+1}\wedge \omega\in \Gamma(\bigwedge^n T^\ast M). $$ In the following, we show that $\|\nabla V\|_2^2/\|V\|_2^2< n(n-p-1)/(n-1)$. Define a vector bundle $E:=T^\ast M\oplus \mathbb{R}e$, where $\mathbb{R}e$ denotes the trivial bundle of rank $1$ with a nowhere vanishing section $e$. We consider an inner product $\langle\cdot,\cdot\rangle$ on $E$ defined by $ \langle \alpha+ f e,\beta +h e\rangle:=\langle\alpha,\beta\rangle+ fh $ for all $\alpha,\beta\in\Gamma(T^\ast M)$ and $f,h\in C^\infty(M)$. Put $$ S_i:=d f_i +f_i e\in \Gamma(E) $$ for each $i$, and $$ \alpha:=S_1\wedge\cdots \wedge S_{n-p+1}\in \Gamma(\bigwedge^{n-p+1} E). $$ Then, we have $\alpha\wedge \omega=e\wedge V$, and so \begin{equation}\label{ora} |\alpha\wedge \omega|=|V|.
\end{equation} For each $k=1,\ldots,n-p+1$, we have \begin{equation*} \begin{split} &\Big\| \big\langle S_k\wedge\cdots \wedge S_{n-p+1}\wedge \omega, \left(\iota(S_{k-1})\cdots\iota(S_1)\alpha\right)\wedge \omega \big\rangle\\ &\qquad-\big\langle S_{k+1} \wedge\cdots \wedge S_{n-p+1}\wedge \omega, \left(\iota(S_k)\cdots\iota(S_1)\alpha\right)\wedge \omega \big\rangle \Big\|_1\\ =&\left\| \big\langle S_{k+1} \wedge\cdots \wedge S_{n-p+1}\wedge \omega, \left(\iota(S_{k-1})\cdots\iota(S_1)\alpha\right)\wedge \iota(d f_k)\omega \big\rangle \right\|_1\\ \leq& C\|\iota(d f_k)\omega\|_2\leq C\delta^{1/4} \end{split} \end{equation*} by Lemma \ref{p5c} (i). By induction, we get \begin{equation}\label{orb} \||\alpha\wedge \omega|^2-|\alpha|^2|\omega|^2\|_1\leq C\delta^{1/4}. \end{equation} In particular, we have \begin{equation}\label{orc} \left|\|\alpha\wedge \omega\|_2^2-\||\alpha|^2|\omega|^2\|_1\right| \leq C\delta^{1/4}. \end{equation} Since we have $ \left|\langle S_i(x), S_j(x)\rangle -\delta_{i j}\right|\leq \delta^{1/1600n} $ for all $x\in G=G(f_1,\ldots,f_{n-p+1})$ and $i,j$ by Lemma \ref{pfub} (ii), we get $ ||\alpha|^2(x)-1|\leq C\delta^{1/1600n} $ for all $x\in G$. Thus, we get \begin{equation}\label{ord} \begin{split} &\left| \frac{1}{\Vol(M)}\int_M(|\alpha|^2|\omega|^2-1) \,d\mu_g \right|\\ =&\Bigg| \frac{1}{\Vol(M)}\int_G(|\alpha|^2-1)|\omega|^2 \,d\mu_g\\ &\qquad+\frac{1}{\Vol(M)}\int_{M\setminus G}(|\alpha|^2-1)|\omega|^2 \,d\mu_g+\frac{1}{\Vol(M)}\int_M(|\omega|^2-1) \,d\mu_g \Bigg|\\ \leq &C\delta^{1/1600n} \end{split} \end{equation} by Lemmas \ref{p4c} and \ref{pfub} (i). By (\ref{ora}), (\ref{orc}) and (\ref{ord}), we get \begin{equation}\label{ore} |\|V\|_2^2-1|\leq C\delta^{1/1600n}. \end{equation} We next estimate $\|\nabla V\|_2^2$. We have \begin{equation*} \begin{split} &\nabla V\\ =& \sum_{i=1}^{n-p+1} (-1)^{i-1} d f_i\otimes d f_1\wedge\cdots \wedge \widehat{d f_i}\wedge\cdots \wedge d f_{n-p+1}\wedge \omega\\ +&\sum_{j<i}\sum_{k=1}^n (-1)^{i-1}(-1)^{j-1} f_i e^k\otimes (\nabla_{e_k} d f_j)\wedge d f_1\wedge\cdots\wedge\widehat{d f_j} \wedge\cdots\wedge \widehat{d f_i}\wedge\cdots \wedge d f_{n-p+1}\wedge \omega\\ +&\sum_{i<j}\sum_{k=1}^n (-1)^{i-1}(-1)^{j} f_i e^k\otimes (\nabla_{e_k} d f_j)\wedge d f_1\wedge\cdots\wedge\widehat{d f_i} \wedge\cdots\wedge\widehat{d f_j}\wedge\cdots \wedge d f_{n-p+1}\wedge \omega\\ +&\sum_{i=1}^{n-p+1} \sum_{k=1}^n (-1)^{i-1} f_i e^k \otimes d f_1\wedge\cdots \wedge \widehat{d f_i}\wedge\cdots \wedge d f_{n-p+1}\wedge \nabla_{e_k}\omega. \end{split} \end{equation*} Thus, we get \begin{equation}\label{orf} \begin{split} &\left\| \nabla V - \sum_{i=1}^{n-p+1} (-1)^{i-1} d f_i\otimes d f_1\wedge\cdots \wedge \widehat{d f_i}\wedge\cdots \wedge d f_{n-p+1}\wedge \omega \right\|_2\\ \leq& \Bigg\|\sum_{j<i}\sum_{k=1}^n (-1)^{i-1}(-1)^{j-1} f_i f_j e^k\otimes e^k\wedge d f_1\wedge\cdots\wedge\widehat{d f_j} \wedge\cdots\wedge \widehat{d f_i}\wedge\cdots \wedge d f_{n-p+1}\wedge \omega\\ &+\sum_{i<j}\sum_{k=1}^n (-1)^{i-1}(-1)^{j} f_i f_j e^k\otimes e^k\wedge d f_1\wedge\cdots\wedge\widehat{d f_i} \wedge\cdots\wedge\widehat{d f_j}\wedge\cdots \wedge d f_{n-p+1}\wedge \omega\Bigg\|_2\\ &+C\sum_{i=1}^{n-p+1}\left\|\sum_{k=1}^n e^k\otimes (\nabla_{e_k}d f_i+f_i e^k)\wedge\omega\right\|_2 + C\|\nabla\omega\|_2\\ \leq &C\delta^{1/8} \end{split} \end{equation} by Lemma \ref{pfua}; note that the two double sums inside the first norm cancel each other after exchanging the roles of $i$ and $j$, so that this term vanishes.
Similarly to (\ref{orb}), we have \begin{equation}\label{org} \begin{split} &\Bigg\|\left|\sum_{i=1}^{n-p+1} (-1)^{i-1} d f_i\otimes d f_1\wedge\cdots \wedge \widehat{d f_i}\wedge\cdots \wedge d f_{n-p+1}\wedge \omega\right|^2\\ &\qquad-\left|\sum_{i=1}^{n-p+1} (-1)^{i-1} d f_i\otimes d f_1\wedge\cdots \wedge \widehat{d f_i}\wedge\cdots \wedge d f_{n-p+1}\right|^2|\omega|^2\Bigg\|_1\\ &\leq C\delta^{1/4}. \end{split} \end{equation} Since we have $ d f_1\wedge\cdots\wedge d f_{n-p+1}\wedge\omega=0, $ we get \begin{equation}\label{orh} \begin{split} &\| |d f_1\wedge\cdots\wedge d f_{n-p+1}|^2|\omega|^2 \|_1\\ =& \||d f_1\wedge\cdots\wedge d f_{n-p+1}|^2|\omega|^2- |d f_1\wedge\cdots\wedge d f_{n-p+1}\wedge\omega|^2 \|_1 \leq C\delta^{1/4} \end{split} \end{equation} similarly to (\ref{orb}). By (\ref{q1k}), we get \begin{equation}\label{ori} \begin{split} &\left| \sum_{i=1}^{n-p+1}(-1)^{i-1}d f_i\otimes d f_1\wedge\cdots\wedge \widehat{d f_i}\wedge \cdots\wedge d f_{n-p+1} \right|^2\\ =&(n-p+1) |d f_1\wedge \cdots\wedge d f_{n-p+1}|^2. \end{split} \end{equation} By (\ref{orh}) and (\ref{ori}), we get \begin{equation}\label{orj} \left\| \left|\sum_{i=1}^{n-p+1}(-1)^{i-1}d f_i\otimes d f_1\wedge\cdots\wedge \widehat{d f_i}\wedge \cdots\wedge d f_{n-p+1}\right|^2|\omega|^2 \right\|_1\leq C\delta^{1/4}. \end{equation} By (\ref{org}) and (\ref{orj}), we have \begin{equation}\label{ork} \left\| \sum_{i=1}^{n-p+1}(-1)^{i-1}d f_i\otimes d f_1\wedge\cdots\wedge \widehat{d f_i}\wedge \cdots\wedge d f_{n-p+1}\wedge\omega \right\|_2^2 \leq C\delta^{1/4}. \end{equation} By (\ref{orf}) and (\ref{ork}), we get \begin{equation}\label{orl} \|\nabla V\|_2\leq C\delta^{1/8}. \end{equation} By (\ref{ore}) and (\ref{orl}), we get $ \lambda_1(\Delta_{C,n})\leq C\delta^{1/4}, $ and so we get the theorem by Claim \ref{porb}. \end{proof} Combining Theorems \ref{MT2} and \ref{pora}, we get Main Theorem 2. \subsection{Almost Parallel $(n-p)$-form II} In this subsection, we show that the assumption ``$\lambda_{n-p}(g)$ is close to $n-p$'' implies the condition ``$\lambda_{n-p+1}(g)$ is close to $n-p$'' under the assumption $\lambda_1(\Delta_{C,n-p})\leq \delta$. \begin{Lem}\label{pala} Suppose that Assumption \ref{asu1} for $k=n-p$ and Assumption \ref{asn-pform} hold. Put $ F:= \langle d f_1\wedge\ldots\wedge d f_{n-p}, \xi \rangle\in C^\infty(M). $ Then, we have $$ \left|\|F\|_2^2-\frac{1}{n-p+1}\right|\leq C\delta^{1/1600n},\quad \left|\|\nabla F\|_2^2-\frac{n-p}{n-p+1}\right|\leq C\delta^{1/1600n} $$ and $$ \left|\frac{1}{\Vol(M)}\int_M f_i F\,d\mu_g\right|\leq C\delta^{1/2} $$ for all $i=1,\ldots, n-p$. \end{Lem} \begin{proof} If $M$ is not orientable, we take the two-sheeted oriented Riemannian covering $\pi\colon (\widetilde{M},\tilde{g})\to (M,g)$, and put $ \widetilde{F}:=F\circ \pi$ and $\tilde{f}_i:=f_i\circ \pi. $ Then, we have $ \|F\|_2=\|\widetilde{F}\|_2$, $\|\nabla F\|_2=\|\nabla \widetilde{F}\|_2,$ $$ \frac{1}{\Vol(\widetilde{M})}\int_{\widetilde{M}} \tilde{f}_i \widetilde{F} \,d\mu_{\tilde{g}}= \frac{1}{\Vol(M)}\int_M f_i F \,d\mu_g $$ and $ \widetilde{F}=\langle d \tilde{f}_1\wedge\ldots\wedge d \tilde{f}_{n-p}, \pi^\ast \xi\rangle. $ Thus, it is enough to consider the case when $M$ is orientable. In the following, we assume that $M$ is orientable, and we fix an orientation of $M$. Put $ \omega:=\ast \xi\in \Gamma(\bigwedge^p T^\ast M). $ Let $V_g\in \Gamma(\bigwedge^n T^\ast M)$ be the volume form of $(M,g)$. 
Then, we have \begin{equation}\label{ala} F V_g= d f_1\wedge\cdots \wedge d f_{n-p}\wedge \omega. \end{equation} Define a vector bundle $E:=T^\ast M\oplus \mathbb{R}e$ and an inner product $\langle\cdot,\cdot\rangle$ on it as in the proof of Theorem \ref{pora}. Put $$ S_i:=d f_i +f_i e\in \Gamma(E) $$ for each $i$, and $$ \beta:=S_1\wedge\cdots \wedge S_{n-p}\in \Gamma(\bigwedge^{n-p} E). $$ Since we have $|F|=|F V_g|$, we get $ \||F|^2-|d f_1\wedge\cdots \wedge d f_{n-p} |^2|\omega|^2\|_1\leq C\delta^{1/4} $ similarly to (\ref{orb}) by (\ref{ala}), and so \begin{equation}\label{alb} \left|\|F\|_2^2-\left\||d f_1\wedge\cdots \wedge d f_{n-p} |^2|\omega|^2\right\|_1\right|\leq C\delta^{1/4}. \end{equation} By Lemma \ref{pfua} and (\ref{ala}), we have \begin{equation*} \left\| \nabla (F V_g)+\sum_{i=1}^{n-p} \sum_{k=1}^n(-1)^{i-1} f_i e^k\otimes e^k\wedge d f_1\wedge \cdots\wedge \widehat{d f_i}\wedge \cdots\wedge d f_{n-p}\wedge \omega \right\|_2\leq C\delta^{1/8}. \end{equation*} Since $|\nabla(F V_g)|=|\nabla F|$, we get \begin{equation}\label{alc} \left|\|\nabla F\|_2^2-\left\|\left|\sum_{i=1}^{n-p} \sum_{k=1}^n(-1)^{i-1} f_i e^k\otimes e^k\wedge d f_1\wedge \cdots\wedge \widehat{d f_i}\wedge \cdots\wedge d f_{n-p}\wedge \omega\right|^2\right\|_1\right|\leq C\delta^{1/8}. \end{equation} We have \begin{equation}\label{ald} \begin{split} &\left|\sum_{i=1}^{n-p} \sum_{k=1}^n(-1)^{i-1} f_i e^k\otimes e^k\wedge d f_1\wedge \cdots\wedge \widehat{d f_i}\wedge \cdots\wedge d f_{n-p}\wedge \omega \right|^2\\ =&\left|\sum_{i=1}^{n-p} (-1)^{i-1} f_i d f_1\wedge \cdots \wedge \widehat{d f_i}\wedge \cdots\wedge d f_{n-p}\wedge \omega\right|^2. \end{split} \end{equation} Similarly to (\ref{orb}), we have \begin{equation*} \begin{split} &\Bigg\|\left|\sum_{i=1}^{n-p} (-1)^{i-1} f_i d f_1\wedge \cdots \wedge\widehat{d f_i}\wedge \cdots\wedge d f_{n-p}\wedge \omega\right|^2\\ &\qquad -\left|\sum_{i=1}^{n-p} (-1)^{i-1} f_i d f_1\wedge \cdots\wedge \widehat{d f_i}\wedge \cdots\wedge d f_{n-p}\right|^2|\omega|^2\Bigg\|_1\leq C\delta^{1/4}. \end{split} \end{equation*} Since we have \begin{equation*} \iota(e)\beta=\sum_{i=1}^{n-p} (-1)^{i-1} f_i d f_1\wedge \cdots\wedge \widehat{d f_i}\wedge \cdots\wedge d f_{n-p}, \end{equation*} we get \begin{equation}\label{alf} \left\|\left|\sum_{i=1}^{n-p} (-1)^{i-1} f_i d f_1\wedge \cdots\wedge \widehat{d f_i}\wedge \cdots\wedge d f_{n-p}\wedge \omega\right|^2 -|\iota(e)\beta|^2|\omega|^2\right\|_1\leq C\delta^{1/4}. \end{equation} By (\ref{alc}), (\ref{ald}) and (\ref{alf}), we get \begin{equation}\label{alf1} \left|\|\nabla F\|_2^2-\left\||\iota(e)\beta|^2|\omega|^2\right\|_1\right|\leq C\delta^{1/8}. \end{equation} We have \begin{equation}\label{alg} |\beta|^2=|d f_1\wedge\cdots\wedge d f_{n-p}|^2+|\iota(e)\beta|^2. \end{equation} We calculate $\sum_{k=1}^n\left|e^k\wedge \beta\right|^2$ in two ways. We have \begin{equation}\label{alh} \begin{split} \sum_{k=1}^n|e^k\wedge \beta|^2=&(p+1)|\beta|^2-|e\wedge\beta|^2\\ =&(p+1)|\beta|^2-|d f_1\wedge\cdots\wedge d f_{n-p}|^2= p|\beta|^2+|\iota(e)\beta|^2 \end{split} \end{equation} by (\ref{alg}).
For all $\eta\in \Gamma(T^\ast M)$, we have \begin{align*} &|\eta\wedge\beta|^2\\ =&|\eta|^2|\beta|^2-\langle\iota(\eta)\beta,\iota(\eta)\beta\rangle\\ =&|\eta|^2|\beta|^2-\sum_{i,j=1}^{n-p}(-1)^{i+j}\langle \eta, d f_i\rangle\langle\eta, d f_j\rangle \langle S_1\wedge\cdots\wedge \widehat{S_i}\wedge\cdots \wedge S_{n-p},S_1\wedge\cdots\wedge \widehat{S_j}\wedge\cdots \wedge S_{n-p} \rangle, \end{align*} and so we get \begin{equation}\label{ali} \begin{split} &\sum_{k=1}^n|e^k\wedge \beta|^2\\ =&n|\beta|^2-\sum_{i,j=1}^{n-p}(-1)^{i+j}\langle d f_i,d f_j\rangle \langle S_1\wedge\cdots\wedge \widehat{S_i}\wedge\cdots \wedge S_{n-p},S_1\wedge\cdots\wedge \widehat{S_j}\wedge\cdots \wedge S_{n-p}\rangle. \end{split} \end{equation} By (\ref{alh}) and (\ref{ali}), we get \begin{equation}\label{alj} \begin{split} &|\iota(e)\beta|^2\\ =&(n-p)|\beta|^2-\sum_{i,j=1}^{n-p}(-1)^{i+j}\langle d f_i,d f_j\rangle \langle S_1\wedge\cdots\wedge \widehat{S_i}\wedge\cdots \wedge S_{n-p},S_1\wedge\cdots\wedge \widehat{S_j}\wedge\cdots \wedge S_{n-p}\rangle. \end{split} \end{equation} Since we have $|\langle S_i,S_j\rangle(x)-\delta_{i j}|\leq C\delta^{1/1600n}$ for all $x\in G=G(f_1,\ldots, f_{n-p})$ by Lemma \ref{pfub} (ii), we have \begin{equation}\label{alk} \begin{split} &\Bigg\|\sum_{i=1}^{n-p}|d f_i|^2\\ &-\sum_{i,j=1}^{n-p}(-1)^{i+j}\langle d f_i,d f_j\rangle \langle S_1\wedge\cdots\wedge \widehat{S_i}\wedge\cdots \wedge S_{n-p},S_1\wedge\cdots\wedge \widehat{S_j}\wedge\cdots \wedge S_{n-p}\rangle|\omega|^2 \Bigg\|_1 \leq C\delta^{1/1600n} \end{split} \end{equation} and \begin{equation}\label{all} \left|\left\||\beta|^2|\omega|^2\right\|_1-1\right|\leq C\delta^{1/1600n} \end{equation} by Lemmas \ref{p4c} and \ref{pfub} (i). By the assumption, we have \begin{equation}\label{alm} \left|\sum_{i=1}^{n-p}\|d f_i\|_2^2-\frac{(n-p)^2}{n-p+1}\right|\leq C\delta^{1/2}. \end{equation} By (\ref{alj}), (\ref{alk}), (\ref{all}) and (\ref{alm}), we get \begin{equation}\label{aln} \left|\left\||\iota(e)\beta|^2|\omega|^2\right\|_1-\frac{n-p}{n-p+1}\right|\leq C\delta^{1/1600n}, \end{equation} and so \begin{equation}\label{alo} \left|\left\||d f_1\wedge\cdots\wedge d f_{n-p}|^2|\omega|^2\right\|_1-\frac{1}{n-p+1}\right|\leq C\delta^{1/1600n} \end{equation} by (\ref{alg}) and (\ref{all}). By (\ref{alb}) and (\ref{alo}), we get $$ \left|\|F\|_2^2-\frac{1}{n-p+1}\right|\leq C\delta^{1/1600n}. $$ By (\ref{alf1}) and (\ref{aln}), we get $$ \left|\|\nabla F\|_2^2- \frac{n-p}{n-p+1}\right| \leq C\delta^{1/1600n}. $$ Let us show the remaining assertion. Since we have \begin{align*} f_i F V_g=&\frac{1}{2}(-1)^{i-1} d \left(f_i^2 d f_1\wedge\cdots\wedge\widehat{d f_i}\wedge \cdots \wedge d f_{n-p}\wedge\omega\right)\\ -&\frac{1}{2}(-1)^{i-1} (-1)^{n-p-1}f_i^2 d f_1\wedge\cdots\wedge\widehat{d f_i}\wedge\cdots \wedge d f_{n-p}\wedge d \omega, \end{align*} we get $$ \left|\frac{1}{\Vol(M)}\int_M f_i F\,d\mu_g\right|\leq C\|\nabla \omega\|_2 \leq C\delta^{1/2} $$ by Stokes' theorem. \end{proof} By applying the min-max principle \begin{align*} &\lambda_{n-p+1}(g)\\ =&\inf\left\{\sup_{f\in V\setminus\{0\}}\frac{\|\nabla f\|_2^2}{\|f\|_2^2}: V\text{ is an $(n-p+1)$-dimensional subspace of } C^\infty (M) \right\} \end{align*} to the $(n-p+1)$-dimensional subspace $\Span_{\mathbb{R}}\{f_1,\ldots, f_{n-p}, F\}$, we immediately get the following corollary (by Lemma \ref{pala}, the functions $f_1,\ldots,f_{n-p},F$ are almost orthogonal in $L^2$, and the Rayleigh quotient of each of them is at most $n-p+C\delta^{1/1600n}$): \begin{Cor}\label{palb} If Assumption \ref{asu1} for $k=n-p$ and Assumption \ref{asn-pform} hold, then we have $ \lambda_{n-p+1}(g)\leq n-p+C\delta^{1/1600n}.
$ \end{Cor} Combining Theorem \ref{MT2} and Corollary \ref{palb}, we get Main Theorem 4. Finally, we investigate the Gromov-Hausdorff limits of sequences of Riemannian manifolds that satisfy our pinching conditions. \begin{Thm} Take $n\geq 5$ and $2\leq p < n/2$. Let $\{(M_i,g_i)\}_{i\in\mathbb{N}}$ be a sequence of $n$-dimensional closed Riemannian manifolds with $\Ric_{g_i}\geq (n-p-1)g_i$ that satisfies one of the following: \begin{itemize} \item[(i)] $\lim_{i\to\infty}\lambda_{n-p+1}(g_i)=n-p$ and $\lim_{i\to \infty}\lambda_1(\Delta_{C,p},g_i)=0$, \item[(ii)] $\lim_{i\to\infty}\lambda_{n-p}(g_i)=n-p$ and $\lim_{i\to \infty}\lambda_1(\Delta_{C,n-p},g_i)=0$. \end{itemize} If $\{(M_i,g_i)\}_{i\in\mathbb{N}}$ converges to a geodesic space $X$, then there exists a geodesic space $Y$ such that $X$ is isometric to $S^{n-p}\times Y$. \end{Thm} \begin{proof} By Main Theorems 2 and 4, we get that there exist a sequence of positive real numbers $\{\epsilon_i\}$ and compact metric spaces $\{Y_i\}$ such that $\lim_{i\to \infty}\epsilon_i=0$ and $d_{GH}(M_i,S^{n-p}\times Y_i)\leq \epsilon_i$. Then, $\{S^{n-p}\times Y_i\}$ converges to $X$ in the Gromov-Hausdorff topology, and so $\{Y_i\}$ is pre-compact in the Gromov-Hausdorff topology by \cite[Theorem 11.1.10]{Pe3}. Thus, there exists a subsequence of $\{Y_i\}$ that converges to some compact metric space $Y$. Therefore, we get that $X$ is isometric to $S^{n-p}\times Y$. Since $X$ is a geodesic space, $Y$ is also a geodesic space. \end{proof}
{ "timestamp": "2021-01-07T02:09:44", "yymm": "1904", "arxiv_id": "1904.06533", "language": "en", "url": "https://arxiv.org/abs/1904.06533" }
\section{Introduction} NGC\,4151 is a well-known Seyfert 1 galaxy, one of the nearest galaxies with an active nucleus. NGC\,4151 is one of the rare objects for which there exist two independent dynamical measurements of the mass of the central black hole. To first order, the black hole mass derived from stellar dynamical modeling depends linearly on the assumed distance to the galaxy \citep{onken2014}. However, the actual distance to the galaxy is rather uncertain. The Extragalactic Distance Database \citep{tully2009} presents distance measurements based on the Tully-Fisher relation: the individual estimate for NGC\,4151 is 3.9$\pm$0.4 Mpc, and the group-average distance is 11.2$\pm$1.1 Mpc. The reliability of these distance estimates is doubtful, as discussed by \citet{onken2014}. The methods based on the reprocessing of the emission of the active nucleus provided much larger distances: 19 Mpc \citep{cackett2007} and 29 Mpc \citep{yoshii2014}. \citet{honig2014} applied a geometric method, measuring the size of the region of hot dust emission as determined from time-delays and infrared interferometry, which yielded 19.0$\pm$2.5 Mpc. The discovery of the type II-P supernova (SN) 2018aoq in NGC\,4151 presents a new opportunity to obtain an independent estimate of the distance to the galaxy. The optical transient Kait-18P=2018aoq was discovered on 2018-04-01.4316 by the Lick Observatory Supernova Search at an unfiltered magnitude of 15.3, at a distance of 73\arcsec\ from the center of NGC\,4151. Spectroscopic observations with the 1.5-m Kanata telescope classified the transient as a Type II supernova\footnotemark. \footnotetext{https://wis-tns.weizmann.ac.il/search} Observations of type II-P SNe can be used to determine distances to their host galaxies using the Expanding Photosphere Method (EPM), which was first developed by \citet{Kirshner1974}. The method is based on measuring the angular radius of the photosphere from photometric data and comparing its growth rate to the expansion velocity extracted from the spectra. The EPM provides distance estimates that are independent of the extragalactic distance ladder. The method requires high-quality spectroscopic and photometric monitoring of SNe and was applied mostly to nearby objects \citep[e.g.,][]{Hamuy2001, Takats2006, Jones2009, Bose2014}, although recently it became possible to perform the EPM on SNe at cosmologically significant redshifts \citep[e.g.,][]{Gall2016, Gall2018}. The other method for distance determination using SNe\,II-P is the Standardized Candle Method (SCM) \citep{Hamuy2002}, based on a correlation between the luminosity and the expansion velocity of SNe during the plateau phase. This method relies on local distance calibrators and yields distances that are in reasonable agreement with the EPM \citep[e.g.,][]{Nugent2006, Poznanski2009, Olivares2010, Gall2018}. \section{Observations} Photometric $UBVRI$ CCD observations of SN\,2018aoq were carried out at the 60-cm and 50-cm telescopes of the Crimean Observatory of the Sternberg Astronomical Institute (SAI), the 70-cm and 20-cm telescopes of the Moscow Observatory of SAI, the 1-m telescope of the Institute of Astronomy of the Russian Academy of Sciences (INASAN) at the Simeiz Observatory, the 60-cm telescope of the Star\'a Lesn\'a Observatory of the Astronomical Institute of the Slovak Academy of Sciences, and the 60-cm telescope of the Shamakhy Astrophysical Observatory. The standard image reduction and photometry were performed using {\sc IRAF}\footnotemark.
\footnotetext{{\sc IRAF} is distributed by the National Optical Astronomy Observatory, which is operated by AURA under cooperative agreement with the National Science Foundation.} The magnitudes of the SN were derived by PSF fitting relative to a sequence of local standard stars, which were calibrated by \citet{lyutyi1973}, \citet{doroshenko2005}, and \citet{roberts2012}. The photometry was transformed to standard Johnson-Cousins $UBVRI$ magnitudes by means of instrumental colour terms. The surface brightness of the host galaxy at the location of the SN is not very high; nevertheless, we checked whether the galaxy background affects the photometry. We used the images obtained before the SN outburst at the Shamakhy Observatory for galaxy subtraction. We found that for most of the images the effect of the galaxy background does not exceed the magnitude errors, but for the 50-cm telescope of SAI it may amount to 0.05--0.1 mag. We applied galaxy subtraction for all images obtained with this telescope. The magnitudes of the standard stars are presented in Table~\ref{tab:localstand}, and the photometric data are presented in Table~\ref{tab:photometry}. Prediscovery observations were reported by \citet{nazarov2018}. We carried out photometry on their images, using our local standard stars and applying galaxy subtraction, and obtained new magnitude estimates, which are also reported in Table~\ref{tab:photometry}. The light curves are shown in Fig.~\ref{fig:ligcur}. \begin{figure} \includegraphics[width=\columnwidth]{sn18aoq_lc.pdf} \caption{The light curves of SN\,2018aoq. The phase is given relative to the explosion date JD\,2458208. The error bars are shown only if they exceed the size of a symbol.} \label{fig:ligcur} \end{figure} The shape of the light curves is typical for SNe II-P. The first observations were obtained on the rising part of the light curves, and we can determine the epoch when the SN reached the plateau phase as JD\,2458215$\pm$1 (April 6). \citet{yamanaka2018} obtained images of NGC\,4151 on JD\,2458209.0 (March 31.5) and derived an upper limit of 17.5 mag in the $R$-band; \citet{ONeil2019} reported that the SN was fainter than 18.89 mag in the 'orange' ATLAS filter on JD\,2458206.97 (March 29). We used a polynomial fit to the $R$-band magnitudes on the rise and found that the best estimate for the epoch of explosion is JD\,2458208$\pm$1, 7 days before the start of the plateau (an illustrative sketch of this procedure is given below). This rise time is in agreement with the average rise time for SNe II-P reported by \citet{Gall2015}. The blue colour of SN\,2018aoq at maximum and the absence of detectable interstellar lines in the spectra allow us to conclude that the absorption in the host galaxy was negligible. The Galactic extinction is small, $E(B-V)_{gal}=0.02$ mag \citep{schlafly2011}. \citet{ONeil2019} compared the colour curves of SN\,2018aoq to those of type II-P SNe for which the extinction is well known, and derived a total extinction for SN\,2018aoq of $E(B-V)_{tot}=0.04$ mag, only slightly larger than $E(B-V)_{gal}$. We used $E(B-V)_{tot}=0.04$ mag for all further calculations. Spectroscopic observations were obtained at the 2-m telescope of the Shamakhy Astrophysical Observatory. The modified Universal Astronomical Grating Spectrograph provided a wavelength range of 3900--7000\,\AA\ with a dispersion of 115\,\AA\,mm$^{-1}$, which corresponds to 4.1 or 8.2 \AA\,pixel$^{-1}$ for different CCD binning. The journal of spectroscopic observations is presented in Table~\ref{tab:spectra}, and the spectra are shown in Fig.~\ref{fig:sp}.
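The explosion-epoch estimate described above amounts to extrapolating the rising branch of the $R$-band light curve back in time. A minimal, illustrative Python sketch of such a procedure is given below; the listed magnitudes are hypothetical placeholders (the actual photometry is given in Table~\ref{tab:photometry}), and, for simplicity, the sketch assumes a fireball-like $f\propto(t-t_0)^2$ flux rise instead of the general polynomial fit used in our analysis.
\begin{verbatim}
import numpy as np

# Hypothetical early-time R-band photometry (JD - 2458200, mag);
# the actual measurements are listed in the photometry table.
t = np.array([9.4, 10.4, 11.4, 12.4, 13.4])
m = np.array([17.3, 16.1, 15.4, 14.8, 14.3])

# For a fireball-like rise, flux ~ a*(t - t0)^2, so sqrt(flux) is
# linear in t and the root of a linear fit gives the explosion epoch.
sqrt_flux = np.sqrt(10.0 ** (-0.4 * m))
a, b = np.polyfit(t, sqrt_flux, 1)
t0 = -b / a
print("explosion epoch ~ JD 2458200 + %.1f" % t0)   # ~8.1 here
\end{verbatim}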
\begin{figure} \includegraphics[width=\columnwidth]{sn18aoq_sp.pdf} \caption{The spectra of SN\,2018aoq. The ages are relative to the date of explosion (JD\,2458208). The vertical dashed lines indicate the absorption minima of the FeII lines used for the EPM. These lines are not detected in the first spectrum, which was not used for the EPM.} \label{fig:sp} \end{figure} We continue the observations of SN\,2018aoq; the complete data set and its analysis will be presented in a separate paper. \section{The EPM distance} The Expanding Photosphere Method (EPM) \citep{Kirshner1974} determines a distance $D$ to the SN from the relation $\theta = R/D$, where $\theta$ is the angular radius of the photosphere and $R$ is its linear radius. The method can be applied if the ejecta are spherically symmetric and the envelope undergoes free expansion, so that the velocity of matter $v$ and the radial distance $r$ are connected by $v = r/(t-t_0)$, where $t_0$ is the zero-point time, which might be offset from the true moment of explosion. The photospheric flux of the SN is described by a modified Planck spectrum $F_\nu(R) = \zeta^2 \pi B_\nu(T_{\rm col})$, where $\zeta$ is the correction factor, $T_{\rm col}$ is the colour temperature, and $B_\nu(T_{\rm col})$ is the Planck function. The description of the EPM is presented in a number of papers \citep[e.g.,][]{Hamuy2001, Takats2006, Jones2009, Gall2018}. We applied the EPM following the prescriptions of \citet{Hamuy2001}. The correction factor $\zeta$ cannot be determined from observations. The empirical relations between $\zeta$ and $T_{\rm col}$ were established by \citet{Eastman1996} and \citet{Dessart2005}. We used the relation by \citet{Dessart2005}, which is confirmed by our research (Baklanov, in prep.) and by \citet{Vogl2018}. We used three sets of filter combinations to derive the temperature and angular radius of the SN photosphere. The errors in the quantities $\theta$ and $T_{\rm col}$ were estimated using a Monte Carlo technique. Samples of data points were drawn from normal distributions representing the uncertainties in the photometric fluxes. The velocity of matter at the photosphere $v_{\rm ph}$ can be measured from the blueshift of weak absorption lines; the FeII $\lambda$5018\AA\ and $\lambda$5169\AA\ lines are used most often \citep{Takats2012}. The observed spectra were corrected for the redshift of the galaxy $z=0.00332$\footnotemark \footnotetext{https://ned.ipac.caltech.edu/} and continuum-subtracted using polynomial fitting with the {\sc SNID} package \citep{Blondin2007}. The spectra were smoothed with a Savitzky-Golay filter \citep{Savitzky1964}, and the wavelengths of the absorption minima were determined. The uncertainties of the velocity measurements were estimated to be in the range of 4--6\%, depending on the spectral resolution and photon statistics of the detector. The computations were carried out for three sets of filter combinations, $BVI$, $BV$, and $VI$, for the velocities derived from the FeII $\lambda$5018 and $\lambda$5169 lines, and for the average velocities. Table~\ref{tab:epm} presents the basic EPM quantities; $T_{\rm col}$, $\zeta$ and $\theta$ are given only for the $BVI$ filter set, since for the other sets they are similar. \begin{table*} \begin{center} \caption{The EPM quantities derived for SN\,2018aoq.
The uncertainties are in parentheses.} \label{tab:epm} \begin{tabular}{ccccccc} \hline JD &Phase,& $v$(FeII$\lambda$5018) & $v$(FeII$\lambda$5169) & $T_{\rm col}(BVI) $ & $\zeta$ & $\theta$ \\ 2458200+ &days& km\,s$^{-1}$ & km\,s$^{-1}$ & K & & $10^{-11}$ rad \\ \hline 34.36 &26.4& 3529 (140) & 3893 (154) & 7022 (225) & 0.662 & 1.62 (0.11) \\ 35.33 &27.3& 3529 (234) & 3893 (231) & 6968 (212) & 0.667 & 1.63 (0.10) \\ 46.35 &38.4& 3125 (185) & 3083 (183) & 6245 (181) & 0.747 & 1.80 (0.12) \\ 52.45 &45.5& 2707 (108) & 2680 (106) & 5975 (184) & 0.786 & 1.88 (0.14) \\ 56.40 &48.4& 2706 (108) & 2666 (106) & 5703 (187) & 0.833 & 1.95 (0.17) \\ 79.39 &71.4& 1903 (75) & 1862 (90) & 5365 (135) & 0.905 & 1.94 (0.13) \\ \hline \end{tabular} \end{center} \end{table*} The ratio $\theta/v$ as a function of time for the filter sets $BVI$, $BV$, and $VI$ is presented in Fig.~\ref{fig:epm}. \begin{figure} \includegraphics[width=\columnwidth]{sn18aoq_distfit.pdf} \caption{The ratio $\theta/v$ as a function of time for three filter sets, for average velocity. } \label{fig:epm} \end{figure} We determined $t_0$ and $D$ using the Markov Chain Monte Carlo method in the {\sc EMCEE} software package \citep{Foreman-Mackey2012}. The results are presented in Table~\ref{tab:dist}. \begin{table} \begin{center} \caption{The EPM distances for SN\,2018aoq. The uncertainties are in parentheses.} \label{tab:dist} \begin{tabular}{cccc} \hline Filter set & FeII line & $D$, Mpc & $t_0$, JD\,2458000+ \\ \hline $BVI$ & 5018 & 21.4 (2.6) & 200.3 (5.5) \\ $BVI$ & 5169 & 19.4 (1.7) & 205.8 (3.2) \\ $BVI$ & Average & 20.2 (2.1) & 203.5 (4.3) \\ $BV$ & 5018 & 19.9 (3.9) & 203.3 (7.7) \\ $BV$ & 5169 & 19.7 (3.6) & 204.3 (6.5) \\ $BV$ & Average & 20.0 (3.9) & 203.3 (7.6) \\ $VI$ & 5018 & 21.0 (3.3) & 199.2 (7.0) \\ $VI$ & 5169 & 19.1 (2.2) & 204.6 (4.4) \\ $VI$ & Average & 19.8 (2.7) & 202.4 (5.6) \\ \hline \end{tabular} \end{center} \end{table} \section{The SCM distance} The Standardized Candle Method (SCM) \citep{Hamuy2002} is based on a correlation between the absolute brightness of SNe II-P and the expansion velocities derived from the minimum of the FeII P-Cygni feature observed during the plateau phase. We used our estimates of expansion velocity from the shift of FeII$\lambda5169$ line and photometry in the $VI$ bands and applied the SCM using the calibration by \citet{Polshaw2015}, based on the Cepheid distances to well-observed SNe II-P. We obtain distance estimates $D_V=16.7\pm1.4$ Mpc, $D_I=16.5\pm1.3$ Mpc, and the average $D=16.6\pm1.1$ Mpc. \section{Discussion} All distance estimates presented in Table~\ref{tab:dist} are consistent with each other, and we may accept the average value $D=20.0\pm1.6$ Mpc as the EPM distance for SN\,2018aoq and NGC\,4151, which is in good agreement with the result of \citet{honig2014}. The estimates of $t_0$ are earlier than the explosion epoch derived from photometry, but for most of the data the difference does not exceed the uncertainties. We should note that the epoch $t_0$ from the EPM fit may be offset from the explosion date \citep{Takats2006}. Recently \citet{ONeil2019} utilised the SCM method for SN\,2018aoq as calibrated by \citet{Polshaw2015} to obtain a distance of $18.2\pm1.2$ Mpc. Our SCM distance is about 9\% shorter than the result of \citet{ONeil2019}, because of small differences in the observational data. The expansion velocity of SN 2018aoq is low, about 2600 km s$^{-1}$ at 50 days past explosion. 
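As a simple cross-check of the EPM analysis, the underlying linear relation $t = t_0 + D\,(\theta/v)$ can be verified directly against the values in Table~\ref{tab:epm}. The Python sketch below is illustrative only: it uses an unweighted least-squares fit with the velocities averaged over the two FeII lines, rather than the full {\sc EMCEE} analysis described above.
\begin{verbatim}
import numpy as np

# EPM quantities from the EPM table; FeII velocities averaged over
# both lines.
t_jd  = np.array([34.36, 35.33, 46.35, 52.45, 56.40, 79.39])  # JD-2458200
v     = np.array([3711., 3711., 3104., 2693.5, 2686., 1882.5])  # km/s
theta = np.array([1.62, 1.63, 1.80, 1.88, 1.95, 1.94]) * 1e-11  # rad

# Free expansion (R = v*(t - t0)) and theta = R/D give
# t = t0 + D*(theta/v): a straight line whose slope is the distance D.
x = theta / v                  # rad s / km (with t in seconds)
t = t_jd * 86400.0             # days -> seconds

D_km, t0_s = np.polyfit(x, t, 1)
print("D  ~ %.1f Mpc" % (D_km / 3.0857e19))        # ~21 Mpc
print("t0 ~ JD 2458200 + %.1f" % (t0_s / 86400.0)) # ~ +2
\end{verbatim}
The result, $D\approx21$ Mpc and $t_0\approx$ JD\,2458202, agrees with the values in Table~\ref{tab:dist} within the quoted uncertainties.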
The luminosity at the plateau is $M_I=-16.4$ mag for the EPM distance, and $M_I=-16.0$ mag for the SCM distance. SN\,2018aoq appears to be an intermediate object between subluminous and normal SNe II-P, as was suggested by \citet{ONeil2019}. The distance measurements by the EPM and SCM may have systematic errors; for the EPM, they result from the adopted values of the dilution factor $\zeta$, which may also be a function of the chemical composition and density structure of the envelope. Other sources of uncertainty are the difference between the true photospheric velocity and that derived from the FeII lines, and the absence of spherical symmetry of the ejecta. The major sources of errors for the SCM are the calibration process and the internal diversity of the properties of SNe II-P. In the case of SN\,2018aoq the SCM distance is about 18\% shorter than the EPM distance, but both values are consistent with the most reliable distance estimate for the host galaxy, $D=19.0\pm2.5$ Mpc, based on a geometric technique \citep{honig2014}. We may conclude that these results confirm the applicability of SNe II-P for distance measurements. \section*{Acknowledgements} The work of D.Tsvetkov and P.Baklanov was partly supported by the Russian Science Foundation Grant No. 16-12-10519. The work of S.Shugarov was partially supported by Grants VEGA 2/0008/17 and APVV-15-0458. The work of I.Volkov was supported by the scholarship of the Slovak Academic Information Agency (SAIA), by the Russian Science Foundation Grant No. 14-12-00146 and the Russian Foundation for Basic Research Grant No. 18-502-12025. The work on photospheric velocity determination was done by M.Sh.Potashov and was supported by the Russian Science Foundation Grant No. 19-12-00229. We thank the anonymous referee for constructive suggestions which helped to improve the paper. \bibliographystyle{mnras}
{ "timestamp": "2019-04-16T02:12:18", "yymm": "1904", "arxiv_id": "1904.06586", "language": "en", "url": "https://arxiv.org/abs/1904.06586" }
\section{Introduction} \par Creating an experimental platform that hosts Majorana bound states (MBSs) in a condensed matter system is a goal that has received great attention recently.\cite{Alicea2012RPP,Beenakker2013ARCMP} Due to robust topological protection, the MBS is a promising qubit candidate for quantum computation.\cite{Kitaev2006AP} One of the platforms proposed to realize the MBS is a topological insulator / superconductor (TI/SC) bilayer system.\cite{LiangFu2008PRL} With chiral $p$-wave superconductivity induced in the topological surface states (TSS), an MBS has been predicted to exist in the core of a vortex.\cite{Read2000PRB,Ivanov2001PRL,Stern2004PRB,Stone2006PRB} Therefore, it is important for the physics community to establish and understand the properties of TI/SC bilayer systems. \par There have been a number of studies on Bi-based TI (Bi$_2$Se$_3$, Bi$_2$Te$_3$, etc.)/SC systems through point contact spectroscopy (PCS)\cite{WenqingDai2017SciRep}, ARPES\cite{MWang2012Science,SuYangXu2014NatPhys}, and STM\cite{JinPengXu2014PRL,JinPengXu2015PRL,HaoHuaSun2016PRL} measurements. PCS and STM probe the magnitude of the superconducting order parameter induced in the top surface of the TI, with a probing depth limited to the mean free path or the coherence length, and cannot be applied when an insulating bulk region is present. ARPES studies the angle-resolved magnitude of the induced order parameter from the first few atomic layers of the top surface of the TI. \par In contrast, a microwave Meissner screening study investigates the high-frequency electromagnetic field response. The microwave field propagates through an insulating layer and penetrates into the superconducting system on the scale of the penetration depth, which is comparable to the thickness of typical thin-film bilayers ($< 200$ nm). Since the field screening response arises throughout the entire bilayer, it can reveal more details of the proximity-coupled bilayer\cite{Deutscher1969,Pambianchi1994PRB,Belzig1996PRB,deGennes1999,JKim2005PRB} that are not directly available to the other techniques. It is also important to note that the screening response study does not require the specialized surface preparation which is critical for many of the other techniques. \par The distinct capabilities of the Meissner screening study on the proximity-coupled system have been previously demonstrated on conventional normal (N) / superconductor (S) bilayer systems such as Cu (N) / Nb (S).\cite{Hook1976JLTP,Simon1984PRB,Kanoda1987PRB,Mota1989JLowTemp,Claassen1991PRB,Pambianchi1994PRB,Pambianchi1995PRB,Onoe1995JPSJ,Pambianchi1996PRB,Pambianchi1996PRB2} It can reveal the spatial distribution of the order parameter and the magnetic field profile throughout the film, as well as their evolution with temperature. From such information, superconducting characteristic lengths such as the normal coherence length $\xi_\text{N}$ and the normal penetration depth $\lambda_\text{N}$ of the proximity-coupled normal layer can be estimated. The study can also reveal thickness-dependent proximity-coupling behavior, which helps to estimate the thickness of the surface states ($t_{\text{TSS}}$) for TI/SC bilayers. The $\xi_\text{N}$, $\lambda_\text{N}$, and $t_\text{TSS}$ of a proximity-coupled TI layer determine the radius of a vortex, the maximum spacing between vortices in a lattice, and the minimum thickness of the TI layer.
Such information is required to avoid intervortex tunneling of MBSs, which would result in a trivial fermionic state.\cite{MCheng2010PRB} \par Compared to other high-frequency electromagnetic techniques such as THz optical measurements, the advantage of the microwave Meissner screening study for investigating the properties of a TI/SC bilayer is that the energy of a 1 GHz microwave photon ($\approx 4$ $\mu$eV) is a marginal perturbation to the system. On the other hand, the energy of a 1 THz optical photon ($\approx 4$ meV) is comparable to the gap energy ($\leq 3$ meV) of typical superconductors used in TI/SC systems such as Nb, Pb, Al, NbSe$_2$, and YB$_6$.\cite{Kittel,Clayman1971SSC,Kadono2007PRB} Therefore, the microwave screening study is an ideal method to study details of the induced order parameter in TI/SC bilayers. \par In this article, we conduct a microwave Meissner screening study on SmB$_6$/YB$_6$, a strong candidate topological Kondo insulator / superconductor bilayer system. The existence of the insulating bulk in SmB$_6$ is currently under debate.\cite{Menth1969PRL,NXu2014NatComm,Syers2015PRL,Tan2015Science,Laurita2016PRB,YXu2016PRL,JingdiZhang2018PRB,YunsukEo2018ArXiv} From measurements of the temperature dependence of the Meissner screening with a systematic variation of the SmB$_6$ thickness, this study shows evidence for the presence of an insulating bulk region in the SmB$_6$ thin films. Through a model of the electrodynamics, the study also provides an estimate of the characteristic lengths of the bilayer system, including the thickness of the surface states. \section{Experiment} \par SmB$_6$/YB$_6$ bilayers were fabricated through a sequential sputtering process without breaking the vacuum, to ensure a pristine interface between SmB$_6$ and YB$_6$ for ideal proximity coupling. The details of the sample fabrication can be found in the Appendix Sec. \ref{SampleGrowth}. The geometry of the bilayers is schematically shown in Fig. \ref{fig:Fig1}(a). The YB$_6$ film has a thickness of 100 nm and $T_c=6.1$ K, obtained from a DC resistance measurement.\cite{SeunghunLee2019Nature} The thickness of the SmB$_6$ layers ($t_{\text{SmB}_6}$) is varied from 20 to 100 nm for a systematic study. These bilayers all have $T_c=5.8\pm0.1$ K, without a noticeable $t_{\text{SmB}_6}$ dependence of $T_c$. The measurement of the effective penetration depth $\lambda_{eff}$ is conducted with a dielectric resonator setup.\cite{HakkiColeman1960,Mazierska1998IEEE,SeokjinBae2019RSI} A 3 mm diameter, 2 mm thick rutile (TiO$_2$) disk, which facilitates a microwave transmission resonance at 11 GHz, is placed on top of the sample mounted in a Hakki-Coleman type resonator.\cite{HakkiColeman1960} This resonator consists of niobium (top) and copper (bottom) plates to obtain a high quality factor for the dielectric resonance. The resonator is cooled down to a base temperature of 40 mK. As the temperature of the sample is increased from the base temperature, the change of the resonance frequency is measured, $\Delta f_0(T)=f_0(T)-f_0(T_{ref})$. $T_{ref}$ here is set to $230$ mK ($\approx 0.04T_c$ of the bilayers), below which $f_0(T)$ of the bilayers shows a saturated temperature dependence. These data are converted to the change in the effective penetration depth $\Delta\lambda_{eff}(T)$ using standard cavity perturbation theory,\cite{Klein1992JSuper,BBJin2002PRB,Ormeno2002PRL} \begin{equation} \Delta\lambda_{eff}(T) =\lambda_{eff}(T)-\lambda_{eff}(T_{ref})= -\frac{G_{geo}}{\pi \mu_0}\frac{\Delta f_0(T)}{f_0^2(T)}.
\end{equation} Here, $G_{geo}$ is the geometric factor of the resonator.\cite{SeokjinBae2019RSI} \begin{figure} \includegraphics[width=1\columnwidth]{Fig1_v5.jpg} \caption{\label{fig:Fig1} (a) A schematic of the bilayer consisting of an SmB$_6$ film and a YB$_6$ film. A parallel microwave magnetic field ($H_0$) is applied to the top surface of the SmB$_6$ layer (red arrows). (b) Temperature dependence of the effective penetration depth $\Delta\lambda_{eff}(T)$ of the SmB$_6$/YB$_6$ bilayers for various SmB$_6$ layer thickness ($t_{\text{SmB}_6}$). (c) $\Delta\lambda_{eff}(T)$ of Cu/Nb (conventional metal / superconductor) bilayers\cite{Pambianchi1996PRB} for various Cu layer thickness ($t_{\text{Cu}}$). The dashed lines are the model fits.\cite{Pambianchi1996PRB} } \end{figure} \par Fig. \ref{fig:Fig1}(b) shows $\Delta\lambda_{eff}(T)$ for the SmB$_6$ (N) / YB$_6$ (S) bilayers for various SmB$_6$ layer thickness $t_{\text{SmB}_6}$. The single-layer YB$_6$ thin film (i.e., $t_{\text{SmB}_6}=0$) shows temperature-independent behavior for $T/T_c<0.2$. This is not only consistent with the BCS temperature dependence of $\Delta\lambda(T)$ for a spatially homogeneous, fully-gapped superconductor,\cite{Abrikosov1988,Prozorov2006SST} but also consistent with previous observations on YB$_6$ single crystals.\cite{Kadono2007PRB,Tsindlekht2008PRB} However, once the SmB$_6$ layer is added, $\Delta\lambda_{eff}(T)$ clearly shows a temperature dependence for $T/T_c<0.2$. Here, the important unconventional feature is that the low-temperature profile of $\Delta\lambda_{eff}(T)$ for the SmB$_6$/YB$_6$ bilayers shows only a marginal $t_{\text{SmB}_6}$ dependence. This is in clear contrast to the case of the Cu (N) / Nb (S) bilayers shown in Fig. \ref{fig:Fig1}(c). The $\Delta\lambda_{eff}(T)$ for this conventional metal/superconductor bilayer system shows considerable evolution as the normal layer thickness $t_{\text{Cu}}$ increases. This is because, when the decay length of the induced order parameter $\xi_\text{N}(T)$ decreases with increasing temperature, the thicker (thinner) normal layer undergoes a larger (smaller) change in the spatial distribution of the order parameter, and hence in the spatial profile of the screening. Therefore, the marginal $t_{\text{SmB}_6}$ dependence of $\Delta\lambda_{eff}(T)$ for the SmB$_6$/YB$_6$ bilayer implies that even though $t_{\text{SmB}_6}$ is increased, the actual thickness of the proximity-coupled screening region in the SmB$_6$ layer remains roughly constant. \section{Model} \par To quantitatively analyze this unconventional behavior, an electromagnetic screening model for a proximity-coupled bilayer is introduced.\cite{Pambianchi1994PRB,Pambianchi1995PRB,Pambianchi1996PRB,Pambianchi1996PRB2} The model solves Maxwell's equations combined with the second London equation for the current and field inside the bilayer with appropriate boundary conditions at each temperature (see Appendix \ref{Screening model equation}), to obtain the spatial profile of the magnetic field $H(z,T)$ and the current density $J(z,T)$ as a function of temperature,\cite{Pambianchi1994PRB} where $z$ denotes the coordinate along the sample thickness direction as depicted in Fig. \ref{fig:Fig1}(a).
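To illustrate the structure of this calculation, a minimal numerical sketch is given below. It solves the one-dimensional London screening equation $d^2H/dz^2 = H/\lambda^2(z)$ by finite differences for a position-dependent local penetration depth $\lambda(z)$. The boundary conditions used here (applied field $H_0$ at the field-facing surface and full screening at the substrate side) are simplifying assumptions made for illustration only; the actual boundary conditions of the model are given in the Appendix.
\begin{verbatim}
import numpy as np

def solve_H(lam, z, H0):
    """Finite-difference solve of H'' = H/lam(z)^2 with H(z[0]) = 0
    (assumed full screening at the substrate side) and H(z[-1]) = H0
    (applied microwave field). Returns H(z) and J(z) = -dH/dz."""
    n, h = len(z), z[1] - z[0]
    A, b = np.zeros((n, n)), np.zeros(n)
    A[0, 0] = 1.0                  # H = 0 at the substrate side (assumed)
    A[-1, -1], b[-1] = 1.0, H0     # H = H0 at the field-facing surface
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
        A[i, i] = -2.0 / h**2 - 1.0 / lam[i]**2
    H = np.linalg.solve(A, b)
    J = -np.gradient(H, z)         # 1D Ampere's law, up to a sign convention
    return H, J

# Example: YB6 (z < 0) plus a proximity-coupled SmB6 region (0 < z < d_N),
# with illustrative parameter values of the kind extracted in the Results.
t_S, d_N = 100e-9, 20e-9                               # thicknesses [m]
z = np.linspace(-t_S, d_N, 1501)
lam = np.where(z < 0.0, 227e-9, 340e-9 * np.exp(z / 52e-9))
H, J = solve_H(lam, z, 1.0)
\end{verbatim}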
From the obtained field and current profiles, one can obtain the total inductance $L(T)$ of the bilayer as \begin{equation} \label{inductance} \begin{split} L(T) &=\frac{\mu_0}{H_0^2} \int_{-t_\text{S}}^{0} \left[H^2(z,T)+\lambda_\text{S}^2(T)J^2(z,T)\right]dz \\ &+ \frac{\mu_0}{H_0^2}\int_{0}^{+d_\text{N}} \left[H^2(z,T)+\lambda_\text{N}^2(z,T)J^2(z,T)\right] dz \\ &+ \frac{\mu_0}{H_0^2}\int_{+d_\text{N}}^{+t_\text{N}} \left[H^2(z)\right] dz, \end{split} \end{equation} from which the effective penetration depth is obtained through the relation $L(T)=\mu_0\lambda_{eff}(T)$. Here, $H_0$ is the amplitude of the applied microwave magnetic field at the top surface of the normal layer (see Fig. \ref{fig:Fig1}(a)), $\lambda_\text{S}$ ($\lambda_\text{N}$) is the local penetration depth of the superconductor (normal layer), $t_\text{S}$ is the thickness of the superconductor, $t_\text{N}$ (N=SmB$_6$ or Cu) is the total thickness of the normal layer, and $d_\text{N}$ ($\leq t_\text{N}$, the integration limit of the second and third terms in Eq.~(\ref{inductance})) is the thickness of the proximity-coupled region in the normal layer, which is assumed to be temperature independent. In Eq.~(\ref{inductance}), $H^2$ is proportional to the stored field energy and $\lambda^2J^2$ is proportional to the stored kinetic energy of the supercurrent. The first, second, and third integration terms come from the superconductor, the proximity-coupled part of the normal layer, and the uncoupled part of the normal layer, respectively. \par A schematic view of the order parameter profile in the bilayers is shown in Fig. \ref{fig:Fig2}. As seen in Fig. \ref{fig:Fig2}(a), for a conventional metal, $d_\text{N}$ is the same as $t_\text{N}$ since the entire normal layer is uniformly susceptible to induced superconductivity, and thus the third integration term in Eq.~(\ref{inductance}) becomes zero. However, as seen in Fig. \ref{fig:Fig2}(b), if there exists an insulating bulk region blocking the propagation of the order parameter up to the top surface in the normal layer (as in the case of a thick TI), only the bottom conducting surface adjacent to the superconductor is proximity-coupled. In this case, $d_\text{N}$ becomes the thickness of the bottom conducting surface states. The third integration term in Eq.~(\ref{inductance}), which accounts for the uncoupled portion of the normal layer, becomes non-zero. However, this third term drops out when the change $\Delta L(T)$ is considered, since the uncoupled SmB$_6$ region has temperature-independent microwave properties below 3 K\cite{Sluchanko2000PRB}, whereas the temperature range of the measurement here lies below 2 K. \begin{figure} \includegraphics[width=1\columnwidth]{Fig2_v2.jpg} \caption{\label{fig:Fig2} (a) Schematic spatial profile of the order parameter $\Delta_\text{N,S}$ (blue) and the local penetration depth $\lambda_\text{N,S}$ (red) through the normal layer (N) / superconductor (S) bilayer sample for the case of the absence of an insulating bulk. $z$ is the thickness direction coordinate and $t_\text{N}$ ($t_\text{S}$) is the thickness of the normal layer (superconductor). The proximitized thickness $d_\text{N}$ is equal to the normal layer thickness $t_\text{N}$. (b) In the presence of an insulating bulk, $d_\text{N} < t_\text{N}$ since the insulating bulk blocks propagation of the order parameter to the top surface. Note that the microwave magnetic field is applied to the right surfaces.
} \end{figure} \par The spatial dependence of screening of the proximity-coupled normal layer is imposed by that of the induced order parameter $\Delta_\text{N}$ (Fig. \ref{fig:Fig2}(a)), which can be approximated by an exponential decay profile $\Delta_\text{N}(z,T)=\Delta_\text{N}(0,T)e^{-z/\xi_\text{N}(T)}$ in terms of the normal coherence length $\xi_\text{N}(T)$.\cite{deGennes1999} The position dependent normal penetration depth is inversely proportional to the order parameter $\lambda_\text{N}\sim1/\Delta_\text{N}$\cite{Deutscher1969JChemSol} so its position dependence is expressed as $\lambda_\text{N}(z,T)=\lambda_\text{N}(0,T)e^{z/\xi_\text{N}(T)}$. Here, the temperature dependence of $\lambda_\text{N}$ at the interface is assumed to follow that of the superconductor\cite{Simon1981PRB} $\lambda_\text{N}(0,T)/\lambda_\text{N}(0,0)=\lambda_\text{S}(T)/\lambda_\text{S}(0) \cong 1+\sqrt{\pi\Delta_0/2k_BT}\exp(-\Delta_0/k_BT)$, which is the asymptotic behavior below 0.3$T_c$ for a fully-gapped superconductor.\cite{Abrikosov1988,Prozorov2006SST} \par For the temperature dependence of the screening in the normal layer, $\xi_\text{N}(T)$ plays a crucial role since it determines the spatial distribution of $\Delta_\text{N}(z,T)$. If the sample is in the clean limit, the temperature dependence of the normal coherence length is given by $\xi_\text{N} = \hbar v_F/2\pi k_B T$, where $v_F$ denotes the Fermi velocity of the N layer. In the dirty limit, it is given by $\xi_\text{N} = \sqrt{\hbar v_F l_\text{N}/6\pi k_B T}$,\cite{Deutscher1969} where $l_\text{N}$ denotes the mean-free path of the N layer. For the model fitting, the simplified expressions $\xi^{clean}_\text{N}(T)=\xi^{clean}_\text{N}(T_0)\times T_0/T$ and $\xi^{dirty}_\text{N}(T)=\xi^{dirty}_\text{N}(T_0)\times \sqrt{T_0/T}$ are used, with $\xi_\text{N}(T_0)$ as a fitting parameter. Here, $T_0$ is an arbitrary reference temperature of interest. Note that the divergence of $\xi_\text{N}(T)$ as $T\rightarrow0$ should be cut off below a saturation temperature due to the finite thickness of the normal layer, which is theoretically predicted,\cite{Falk1963,Deutscher1969} and also experimentally observed from magnetization studies on other bilayer systems.\cite{Mota1989JLowTemp,Onoe1995JPSJ} In our measurements, the effect of this saturation of $\xi_\text{N}(T)$ can be seen from the sudden saturation of the $\Delta\lambda_{eff}(T)$ data below $0.04T_c$ (see Fig. \ref{fig:Fig1}(b) and Fig. \ref{fig:Fig3}(b-d)). Therefore, only the data obtained in a temperature range of $T/T_c\geq0.04$ is fitted, where the $\Delta\lambda_{eff}(T)$ data indicates that $\xi_\text{N}$ is temperature dependent. \par A given set of these parameters $\lambda_\text{S}(0)$, $\lambda_\text{N}(0,0)$, $\xi_\text{N}(T_0)$, and $d_\text{N}$ determines a model curve of $\Delta\lambda_{eff}(T)$. Therefore, by fitting the experimental data to a model curve, one can determine the values of these characteristic lengths. This screening model has successfully described $\Delta\lambda(T)$ behavior of various kinds of normal/superconductor bilayers.\cite{Pambianchi1995PRB,Pambianchi1996PRB,Pambianchi1996PRB2} \section{Results} \label{Result} \par As seen in Fig. \ref{fig:Fig3}(a), the model is first applied to fit $\Delta\lambda_{eff}(T)$ of a single layer YB$_6$ thin film (i.e., no SmB$_6$ layer on the top) to obtain $\lambda_\text{S}(0)$: the simplest case where one needs to consider only the first term in Eq. (\ref{inductance}). 
Here, the data in the temperature range $T < 1.6$ K ($\approx 0.28 T_c$ of the SmB$_6$/YB$_6$ bilayers) are fitted to avoid the contribution from the niobium top plate to $\Delta f_0(T)$. The best fit is determined by finding the fitting parameters that minimize the root-mean-square error $\sigma$ of $\Delta\lambda_{eff}(T)$ between the experimental data and the model curves. The best fit gives $\lambda_\text{S}(0)=227 \pm 2$ nm (the determination of the error bar is described in Appendix \ref{ErrorbarDet}). A comparison between the estimated $\lambda_\text{S}(0)$ of the YB$_6$ thin film and that obtained in other work is given in Appendix \ref{YB6 penetration depth}. \par We now fix the value of $\lambda_\text{S}(0)$ of the YB$_6$ layer and focus on extracting the characteristic lengths of the induced superconductivity of the bilayers. Recent point contact spectroscopy (PCS) measurements on a series of SmB$_6$/YB$_6$ bilayers\cite{SeunghunLee2019Nature} help to reduce the number of fitting parameters: the point contact measurement on the bilayer with $t_{\text{SmB}_6}=20$ nm at 2 K showed perfect Andreev reflection, i.e., conductance doubling at the interface between a metal tip and the top surface of the SmB$_6$, indicating that the entire 20 nm thick SmB$_6$ layer is proximity-coupled. Therefore, $d_\text{N}$ is fixed to 20 nm when fitting the $\Delta\lambda_{eff}(T)$ data of the bilayer with $t_{\text{SmB}_6}=20$ nm. \begin{figure} \includegraphics[width=1\columnwidth]{Fig3_v3.jpg} \caption{\label{fig:Fig3} $\Delta\lambda_{eff}(T)$ vs. $T/T_c$ data and fits for SmB$_6$/YB$_6$ bilayers at low temperature, $T/T_c<0.3$. (a) The single-layer YB$_6$ (100 nm) film ($t_{\text{SmB}_6}=0$ nm). The magenta points are data, and the blue line is a fit from the electromagnetic screening model. (b) The bilayer with $t_{\text{SmB}_6}=20$ nm. The blue line is a fit with the clean limit temperature dependence of $\xi_\text{N}(T)$, and the red line is a fit with the dirty limit temperature dependence. (c) and (d) The bilayers with $t_{\text{SmB}_6}=40$ nm and $100$ nm, respectively.} \end{figure} \par The fitting is conducted with the clean- and dirty-limit temperature dependences of $\xi_\text{N}(T)$, as shown in Fig. \ref{fig:Fig3}(b). The clean limit fit (blue) gives $\xi^{clean}_\text{N}(2\text{K})=52\pm1$ nm and $\lambda_\text{N}(0,0)=340\pm2$ nm with $\sigma = 0.237$. On the other hand, the dirty limit fit (red) gives $\xi^{dirty}_\text{N}(2\text{K})=262\pm180$ nm and $\lambda_\text{N}(0,0)=505\pm7$ nm with $\sigma = 0.780$. Not only does the dirty limit fit visibly deviate from the data points, but its $\sigma$ is also about three times larger than that of the clean limit fit, implying that the clean limit is more appropriate for describing $\xi_\text{N}(T)$ of the SmB$_6$ layer. Henceforth, the $\Delta\lambda_{eff}(T)$ data for the bilayers with other $t_{\text{SmB}_6}$ are fitted using the clean-limit temperature dependence of $\xi_\text{N}$.
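The best-fit search itself can be organized as a simple scan over the parameter space. A minimal sketch is given below; \texttt{model\_dlam} is a hypothetical helper that returns the model $\Delta\lambda_{eff}(T)$ of Eq.~(\ref{inductance}) for a given parameter set, and the grids are placeholders.
\begin{verbatim}
import numpy as np
from itertools import product

def best_fit(model_dlam, data_T, data_dlam, xi_grid, lam_grid, d_grid):
    # Brute-force scan minimizing the RMS error sigma between the
    # measured and model Delta-lambda_eff(T) curves.
    best_sigma, best_params = np.inf, None
    for xi, lam, d in product(xi_grid, lam_grid, d_grid):
        model = model_dlam(data_T, xi, lam, d)
        sigma = np.sqrt(np.mean((model - data_dlam) ** 2))
        if sigma < best_sigma:
            best_sigma, best_params = sigma, (xi, lam, d)
    return best_sigma, best_params
\end{verbatim}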
Also, the obtained value of $\xi_\text{N}$(2K) = 52 nm will be used when the data of the bilayers with other $t_{\text{SmB}_6}$ are fitted, as the Fermi velocity of the surface bands, which determines the value of $\xi_\text{N}$, does not have a clear TI layer thickness dependence.\cite{SuYangXu2014NatPhys} \begin{table} \centering \begin{tabular}{c |c|c|c} \hline \hline \multirow{2}{*}{Characteristic lengths} & \multicolumn{3}{c}{SmB$_6$ layer thickness} \\ \cline{2-4} & 20 nm & 40 nm & 100 nm \\ \hlinewd{1.5pt} $\xi_\text{N}$(2K) (nm)& $52\pm 1$ & 52$^*$ & 52$^*$ \\ \hline $d_\text{N}$ (nm) & 20$^*$ & $8\pm 2$ & $ 10\pm 1$ \\ \hline $\lambda_\text{N}(0,0)$ (nm) & $340\pm 2$ &$159\pm 2$ & $207\pm 2$ \\ \hline \hline \end{tabular} \caption{\label{table2} Summary of the characteristic lengths extracted from the electrodynamic screening model for the TI/SC bilayers with different SmB$_6$ layer thicknesses. All fits on the bilayers assume $\lambda_S(0) = 227$ nm, which is obtained from the fit to the single-layer YB$_6$ film. Note that the values marked with an asterisk are held fixed during the fitting. } \end{table} \par For the bilayers with $t_{\text{SmB}_6}=40$ and $100$ nm, $d_\text{N}$ is now set to be a free fitting parameter. As seen from Fig. \ref{fig:Fig3}(c) and (d), the resulting fits give $d_\text{N}=8\pm2$ nm and $\lambda_\text{N}(0,0)=159\pm2$ nm for the bilayer with $t_{\text{SmB}_6}=40$ nm, and $d_\text{N}=10\pm1$ nm and $\lambda_\text{N}(0,0)=207\pm2$ nm for the bilayer with $t_{\text{SmB}_6}=100$ nm. The estimated $d_\text{N} \approx 9$ nm is much smaller than $t_{\text{SmB}_6}$, which is consistent with the absence of an induced order parameter on the top surface of the 40 and 100 nm thick SmB$_6$ layers measured by point contact spectroscopy.\cite{SeunghunLee2019Nature} A summary of the estimated characteristic lengths $\xi_\text{N}$(2K), $d_\text{N}$, and $\lambda_\text{N}(0,0)$ for the 20, 40, and 100 nm thick SmB$_6$ layers on top of YB$_6$ is presented in Table~\ref{table2}. \section{Discussion} \par We now discuss the implications of these results and propose a microscopic picture for the proximity-coupled bilayers. The important implication of the above results is the absence of Meissner screening in the bulk of the proximity-coupled SmB$_6$, which is consistent with the existence of an insulating bulk region inside the SmB$_6$ layer. If the entire SmB$_6$ layer were conducting without an insulating bulk inside, the proximity-coupled thickness $d_\text{N}$ would equal $t_{\text{SmB}_6}$ for the thicker films as well, considering the long normal coherence length of $\approx52$ nm. In that case, as $t_{\text{SmB}_6}$ increases, one would expect a continuous evolution toward stronger $\Delta\lambda(T)$, as seen in the Cu/Nb system (Fig. \ref{fig:Fig1}(c)), which is not observed in Fig. \ref{fig:Fig1}(b). Also, the estimated $d_\text{N}\approx9$ nm for the bilayers with $t_{\text{SmB}_6}$= 40 and 100 nm is much smaller than half of $t_{\text{SmB}_6}$. As illustrated in Fig. \ref{fig:Fig4}(a), this situation can only be explained if thick insulating bulk regions of $t_{\text{bulk}}\approx 22$ and $82$ nm exist in the bilayers with $t_{\text{SmB}_6}=$40 and 100 nm, respectively. \par This thick insulating bulk provides a spatial separation between the top and bottom surface conducting states, preventing the order parameter from propagating to the top surface.
Thus, only the bottom surface states are proximitized in the $t_{\text{SmB}_6}=$40 and 100 nm cases, and hence one can conclude that the proximitized thickness $d_\text{N}\approx9$ nm equals the thickness of the surface states $t_{\text{TSS}}$. Note that this confirmation of the presence of the insulating bulk in the TI layer cannot be made solely from the PCS study. Even if the PCS study observed the absence of the order parameter on the top surface of the TI layer (SmB$_6$ in this case), this absence could be due either to an insulating bulk or to a short normal coherence length $\xi_\text{N} < t_{\text{SmB}_6}$. The large value of $\xi_\text{N}=52$ nm, which is larger than $t_{\text{SmB}_6}=40$ nm, rules out the latter scenario and confirms the presence of an insulating bulk inside the SmB$_6$ layers. \par This picture is also consistent with the observation that the entire SmB$_6$ layer with $t_{\text{SmB}_6}=20$ nm is proximity-coupled (Fig. \ref{fig:Fig4}(b)); since $2t_{\text{TSS}}\approx t_{\text{SmB}_6}$, the top and bottom conducting surface state wavefunctions are likely to overlap weakly through their exponentially decaying profiles (Fig. \ref{fig:Fig4}(b)). Thus, the induced order parameter is able to reach the top surface states, giving $d_\text{N}=20$ nm for this case. Although such overlap is expected to open a hybridization gap in the surface states, the fact that 20 nm SmB$_6$ on YB$_6$ is entirely proximity-coupled implies that the opened gap is much smaller than the energy difference between the Fermi level of SmB$_6$ and the Dirac point. Note that topological protection might not be affected by such weak hybridization, provided that the Fermi level is sufficiently far from the Dirac point present in thick SmB$_6$.\cite{SuYangXu2014NatPhys} \begin{figure} \includegraphics[width=1\columnwidth]{Fig4_v16.jpg} \caption{\label{fig:Fig4} Schematic view (not to scale) of the proposed position dependence of the surface state wavefunction $|\psi_\text{TSS}(z)|$ (black) and the induced order parameter $\Delta_\text{N}(z)$ (red) in the SmB$_6$/YB$_6$ bilayer. The $|\psi_\text{TSS}(z)|$ is also visualized by the blue gradations. The sketches are based on the estimated proximity-coupled thickness $d_\text{N} \approx 9$ nm and the normal coherence length $\xi_\text{N}(2\text{K}) = 52$ nm for the case of $t_{\text{SmB}_6}$= (a) 40 nm, and (b) 20 nm. } \end{figure} \par Besides confirming the existence of an insulating bulk in the SmB$_6$ layer, the extracted fitting parameters based on the electromagnetic model provide estimates for the important characteristic lengths, such as $\xi_\text{N}$, $\lambda_\text{N}$, and $t_\text{TSS}$, as seen in Sec. \ref{Result}. These estimates can be utilized in designing a TI/SC device such as a vortex MBS device. $\xi_\text{N}$ determines the radius of the vortex core $r_v$. In the mixed state above the first critical field, $\lambda_\text{N}$ determines the maximum spacing $R_v$ between neighboring vortices in the vortex lattice.\cite{Tinkham1996} The ratio $r_v/R_v$ determines the overlap of two adjacent MBSs. The overlap of the wavefunctions of the two MBSs results in intervortex tunneling, which splits the energy levels of the MBSs away from zero energy and makes them trivial fermionic excitations,\cite{MCheng2010PRB} \begin{equation} \Delta E_\text{split} \sim \frac{1}{\sqrt{k_FR_v(\lambda_\text{N})}}\exp\left(-\frac{R_v(\lambda_\text{N})}{r_v(\xi_\text{N})}\right).
\end{equation} Therefore, information on $\xi_\text{N}$ and $\lambda_\text{N}$ helps to evaluate how robust the MBSs of a device will be against intervortex tunneling. \par $t_{\text{TSS}}$ determines the minimum required thickness of the device. If the device is too thin ($t_{\text{SmB}_6}\sim t_{\text{TSS}}$), the wavefunction overlap between the top and bottom surface states becomes significant, which opens a large hybridization gap up to the Fermi level. As a result, the surface states lose not only their electric conduction but also their spin-momentum locking property.\cite{SuYangXu2014NatPhys} In this case, no MBS is hosted in the vortex core, and hence a thickness larger than the estimated $2t_{\text{TSS}}$ is recommended. These discussions show how the characteristic lengths extracted from the Meissner screening study serve as guidelines for designing a vortex MBS device with TI/SC bilayer systems. \section{Conclusion} \par In summary, a microwave Meissner screening study is introduced and utilized to investigate the spatially dependent electrodynamic screening response and the corresponding properties of TI/SC bilayers. The advantages of the study in investigating the properties of a TI/SC system are demonstrated by the measurement and modeling of the temperature dependence of the screening with systematic TI-layer thickness variation. The study goes beyond the surface response to examine the screening properties of the entire TI layer, and conclusively uncovers the existence of an insulating bulk in the TI layer. Also, the study provides estimates for the characteristic lengths of the TI/SC bilayer, which shed light on the design of a vortex MBS device, providing guidelines for the radius of the vortex core, the energy level splitting due to intervortex tunneling, and the thickness of the device. With its versatile capabilities, the microwave Meissner screening study can serve as a standard characterization method for a variety of TI/SC systems before they are used as building blocks in topological quantum computation. \begin{acknowledgments} The authors thank Yun-Suk Eo and Valentin Stanev for helpful discussions. This work is supported by NSF grant No. DMR-1410712, DOE Office of High Energy Physics under Award No. DE-SC 0012036T (support of S.B.), Office of Basic Energy Science, Division of Material Science and Engineering under Award No. DE-SC 0018788 (measurements), ONR grant No. N00014-13-1-0635, AFOSR grant No. FA 9550-14-10332 (support of S.L., X.Z., and I.T.), and the Maryland Center for Nanophysics and Advanced Materials. \end{acknowledgments}
{ "timestamp": "2019-09-17T02:05:32", "yymm": "1904", "arxiv_id": "1904.06620", "language": "en", "url": "https://arxiv.org/abs/1904.06620" }
\section{Introduction \label{SEC-INTRO}} A crucial issue in verification methods is their application to large-scale scientific or industrial computations on supercomputers. Many numerical solvers have been proposed for modern massively parallel supercomputers, and application researchers would like to compare these solvers in terms of both computational speed and reliability. The concept of a posteriori verification methods is proposed in order to meet the needs of application researchers. A posteriori verification methods have the workflow shown in Fig.~\ref{FIG-SCHEMATIC}. An approximate solution is first obtained and then verified. The former and latter procedures are referred to as a solver and a verifier, respectively. The goal of the present study is to integrate the verifier routine, as an optional function, into existing numerical solver libraries. \begin{figure}[h] \begin{center} \includegraphics[width=0.6\textwidth]{fig-schematic.eps} \end{center} \caption{Schematic diagram of the workflow with an a posteriori verification method. } \label{FIG-SCHEMATIC} \end{figure} The present research is motivated by large-scale electronic state calculation, a major field in computational material science and engineering. As explained in \ref{SEC-GHEV}, a mathematical model is used for the fundamental Schr\"{o}dinger-type equation, and the problem is reduced to the generalized real-symmetric matrix eigenvalue problem \begin{eqnarray} A x_k = \lambda_k B x_k \label{EQ-QM-GEP} \end{eqnarray} under the generalized orthogonality condition \begin{eqnarray} x_i^{\rm T} B x_j = \left\{\begin{array}{ll} 1 & \mathrm{if} \ i = j \\ 0 & \mathrm{otherwise} \end{array}\right., \label{EQ-ORTHO-CON} \end{eqnarray} where both $A$ and $B$ are real-symmetric $n \times n$ matrices, with $B$ being positive definite. Here, we assume that \[ \lambda_1 \le \lambda_2 \le \dots \le \lambda_n . \] Applying our results to problems with complex Hermitian matrices is straightforward. In large-scale electronic state calculations, many eigenvalues are densely clustered or almost degenerate, and distinguishing them numerically may be difficult. In order to obtain reliable results, we consider verification methods for generalized eigenvalue problems. For the sake of completeness as verification methods, we also need to take into account all numerical errors that occur when the matrices $A$ and $B$ are generated from the fundamental Schr\"{o}dinger-type equation. Although we do not consider the fundamental Schr\"{o}dinger-type equation in detail herein, we briefly discuss this equation in \ref{SEC-GHEV}. One of the authors (T. H.) developed the middleware EigenKernel \cite{EIGENKERNEL-URL, IMACHI2016-JIP, 2018TANAKA} with various parallel solvers for generalized eigenvalue problems and plans to add a verifier routine. The total elapsed time $T_{\rm tot}$ is the sum of the times for the solver $T_{\rm sol}$ and the verifier $T_{\rm veri}$ ($T_{\rm tot} = T_{\rm sol} + T_{\rm veri}$). We attempt to construct the verifier algorithm so that the verifier accounts for only a moderate fraction of the total time ($T_{\rm veri} \le T_{\rm sol}$). Since the verifier can use highly optimized matrix multiplication routines, it is suitable for high-performance computing on supercomputers. In the solver procedure, approximate solutions $(\ap{\lambda}_k, \ap{x}_k)$, $k = 1, 2, \dots, n$, such that \begin{eqnarray} A \ap{x}_k \approx \ap{\lambda}_k B \ap{x}_k \label{EQ-QM-GEP-APPROX} \end{eqnarray} are obtained by any numerical solver algorithm.
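As a small-scale illustration of this solver step, the following Python sketch computes approximate eigenpairs of a random symmetric-definite test pair $(A,B)$ with \texttt{scipy.linalg.eigh}, a serial stand-in for a parallel routine such as \textsf{pdsygvx}; the test matrices here are synthetic.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)); A = (A + A.T) / 2            # real symmetric
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)  # positive definite

# Ascending eigenvalues; eigenvectors satisfy X_hat.T @ B @ X_hat ~ I
lam_hat, X_hat = eigh(A, B)
print(np.max(np.abs(A @ X_hat - B @ X_hat * lam_hat)))        # residual check
\end{verbatim}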
A verifier procedure bounds the difference between the exact and approximate solutions, such as $|\lambda_k - \ap{\lambda}_k|$ or $\norm{x_k - \ap{x}_k}$. If the relation $|\lambda_k - \ap{\lambda}_k| \le r_k$ is obtained for a given positive number $r_k$, for example, this indicates that the exact eigenvalue $\lambda_k$ lies in a disk with center $\ap{\lambda}_k$ and radius $r_k$. For this purpose, a number of enclosure methods have been developed, e.g., \cite{Ya2001,MiOgRuOi2010,Mi2012}. In the present paper, we propose a method of enclosing all eigenvalues that is straightforward, efficient, and easy to implement on supercomputers. The proposed method is based on Yamamoto's theorem \cite{Ya1984} and is essentially the same as the method proposed in a previous paper \cite{Mi2012}. In other words, we specialize the previous method \cite{Mi2012} to generalized real-symmetric eigenvalue problems. Note that it is not possible, in general, to state that one method is better or worse than another, because this depends on the purpose. We compare the advantages and disadvantages of these enclosure methods in Section~\ref{SEC-VERIF-METHODS}. The a posteriori verification strategy is important mainly with regard to three aspects. First, numerical methods for the densely clustered eigenvalue problem have potential difficulties in computing reliable numerical solutions, as explained above. Second, various numerical algorithms have been proposed for efficient parallel computations that are suitable for current and next-generation supercomputers, and application researchers would like to compare these methods with respect to both computational speed and numerical reliability. Third, the emergence of machine learning has driven the design of computer architectures that accelerate low-precision (single- or half-precision) calculation. The efficient use of low-precision calculation, typically in mixed-precision calculation, will be important in any high-performance computational science field \cite{DONGARRA2018-HPCASIA, ALVERMANN2018}. A posteriori verification methods guarantee satisfactory numerical reliability when low-precision calculation is used. The remainder of the present paper is organized as follows. Section~\ref{SEC-BACKGROUND} explains the physical and mathematical backgrounds. The proposed verification method and numerical examples are presented in Sections \ref{SEC-VERIF-METHODS} and \ref{SEC-NUM-EXAMPLE}, respectively. Section \ref{SEC-SUMMARY} presents a summary and an outlook for future research. \section{Background \label{SEC-BACKGROUND}} \subsection{Large-scale electronic state calculation and densely clustered eigenvalue problem \label{SEC-CLUSTERED}} The present electronic state calculation is briefly introduced in \ref{SEC-GHEV}. The matrix size $n$ is approximately proportional to the number of atoms, molecules, or electrons in the material. An eigenvalue $\lambda_k$ and its eigenvector $x_k$ indicate the energy and the wavefunction, respectively, of an electron. The present research is motivated, in particular, by a previous study \cite{HOSHI2018-PENTA}, in which we focused on the participation ratio \cite{BELL1970,1996FUJIWARA}, defined for a vector $v \equiv (v_1, v_2, \dots, v_n)$ as \begin{eqnarray} P \equiv P(v) = \left( \sum_{j=1}^n |v_j|^4 \right)^{-1}. \end{eqnarray} The participation ratio is a measure of the spatial extension of the electronic wavefunction and governs the electronic device properties.
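The definition above is a simple componentwise sum and can be evaluated directly. The sketch below assumes $v$ is normalized in the Euclidean sense (for $B$-orthonormal eigenvectors this is only approximate) and checks the two extreme cases.
\begin{verbatim}
import numpy as np

def participation_ratio(v):
    # P(v) = 1 / sum_j |v_j|^4 for a normalized vector v
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    return 1.0 / np.sum(np.abs(v) ** 4)

n = 1000
print(participation_ratio(np.ones(n)))    # fully extended vector: P = n
print(participation_ratio(np.eye(n)[0]))  # single-component vector: P = 1
\end{verbatim}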
A dense eigenvector, {\it i.e.}, a vector with only a few components that are negligible in absolute value, has a large participation ratio. The corresponding electronic wavefunction is extended throughout the material and can contribute to electrical current. A sparse eigenvector, {\it i.e.}, a vector with only a few components that are large in absolute value, has a small participation ratio. The corresponding electronic wavefunction is localized in the material and cannot contribute to electrical current. An interesting research target in large-scale problems is an \lq intermediate' electronic wavefunction, i.e., a wavefunction that shows intermediate properties between extended and localized wavefunctions. Such \lq intermediate' wavefunctions appear, for example, in Fig. 1 of Ref.~\cite{1996FUJIWARA} or Fig. 3 of Ref.~\cite{HOSHI2018-PENTA}. The densely clustered eigenvalue problem in (\ref{EQ-QM-GEP}) appears in large-scale calculations and is illustrated in Fig.~\ref{FIG-CLUSTERED}. In this problem, the difference between consecutive eigenvalues, $\delta_k \equiv \lambda_{k+1} - \lambda_{k}$, $k = 1, 2, \dots, n-1$, tends to be proportional to $1/n$ $(\delta_k \propto 1/n)$. Consequently, many eigenvalues are densely clustered or almost degenerate ($\delta_k \rightarrow 0$) in a large-matrix problem $(n \rightarrow \infty)$, and distinguishing these eigenvalues numerically may be difficult. It is crucial to distinguish each eigenvalue numerically among densely clustered eigenvalues, because the participation ratio and other physical quantities are defined for each eigenvector. If two calculated eigenvalues $\hat{\lambda}_{k}$ and $\hat{\lambda}_{k+1}$ cannot be distinguished in the numerical calculation, or if the two eigenvalues are recognized, unphysically, to be degenerate, then the corresponding eigenvectors $\hat{x}_k$ and $\hat{x}_{k+1}$ cannot be defined uniquely. In this case, the participation ratio values $P(\hat{x}_k)$ and $P(\hat{x}_{k+1})$ are not defined uniquely, and any discussion of these values will be meaningless. \begin{figure}[h] \begin{center} \includegraphics[width=0.4\textwidth]{fig_clustered.eps} \end{center} \caption{Schematic diagram of a densely clustered eigenvalue problem in large-scale electronic state calculations. (a), (b) Eigenvalue distribution in (\ref{EQ-QM-GEP}) with (a) small or (b) large matrix size $n$. A cross indicates an eigenvalue on the real axis. The difference between consecutive eigenvalues tends to be proportional to $1/n$. (c), (d) Materials with (c) small or (d) large matrix size $n$. Ovals indicate molecules. The matrix size $n$ is proportional to the size of the molecules $n_{\rm mol}$ ($n \propto n_{\rm mol}$). } \label{FIG-CLUSTERED} \end{figure} \subsection{Numerical solvers for the generalized eigenvalue problem \label{SEC-SOLVER-GEV}} Here, an overview is given of the parallel dense-matrix solvers for the generalized eigenvalue problem of (\ref{EQ-QM-GEP}), in particular of the variety of algorithms used.
The solver algorithm for (\ref{EQ-QM-GEP}) consists of four procedures: (i) Cholesky decomposition of $B$, \begin{eqnarray} B=R^{\top}R, \end{eqnarray} with an upper triangular matrix $R$, (ii) reduction to the standard eigenvalue problem (SEP) \begin{eqnarray} A^{\prime}y_k=\lambda_k y_k, \label{EQ-RED-SEP} \end{eqnarray} with \begin{eqnarray} A^{\prime} \equiv R^{-\top}AR^{-1}, \end{eqnarray} (iii) solution of the standard eigenvalue problem (\ref{EQ-RED-SEP}), and (iv) transformation of the eigenvectors \begin{eqnarray} x_k = R^{-1}y_k. \end{eqnarray} The set of procedures (i), (ii), and (iv) is referred to as the reducer, and procedure (iii) is referred to as the SEP solver. Although ScaLAPACK \cite{SCALAPACK-URL, SCALAPACK-BOOK} is the {\it de facto} standard parallel numerical library, it was developed mainly in the 1990s, and several of its routines exhibit severe bottlenecks on modern massively parallel supercomputers. The novel solver libraries ELPA \cite{ELPA-URL, ELPA2014} and EigenExa \cite{EIGENEXA-URL,EigenExa-PAPER} were proposed in order to overcome these bottlenecks. The ELPA code was developed in Europe under tight collaboration between computer scientists and material science researchers, and its main target application is FHI-aims \cite{FHI-AIM-URL, FHI-AIM-PAPER}, a well-known electronic state calculation code. The EigenExa code, on the other hand, was developed at RIKEN in Japan. Importantly, the ELPA code has routines optimized for x86, IBM Blue Gene, and AMD architectures, whereas the EigenExa code was developed to be optimal mainly on the K computer, a Japanese flagship supercomputer. Both ScaLAPACK and ELPA provide reducer routines, and all of ScaLAPACK, ELPA, and EigenExa provide SEP solver routines. Since the computational performance depends on both the problem and the architecture, it is, in principle, possible to construct a `hybrid' workflow in which the reducer routine is chosen from one library and the SEP solver routine is chosen from another, so as to realize optimal performance. The middleware EigenKernel was developed in order to realize such hybrid workflows. An obstacle to realizing the hybrid workflow is the difference in matrix distribution schemes between libraries. EigenKernel provides data conversion routines between libraries and thus surmounts this obstacle. Figure \ref{FIG-WORKFLOW-EK} shows the possible workflows for a future version of EigenKernel with the a posteriori verification routine. The SEP solvers and the reducers in Fig. \ref{FIG-WORKFLOW-EK} are briefly explained below. The SEP solver and the two reducers in ScaLAPACK are the traditional routines. The SEP solvers `ELPA1' and `Eigen\verb|_s|' are also based on the traditional algorithm with tridiagonalization. The other two SEP solvers, `ELPA2' and `Eigen\verb|_sx|', and the reducer in ELPA are based on non-traditional algorithms for better performance under massive parallelism. The detailed algorithms for these routines are found in Ref.~\cite{2018TANAKA}. \begin{figure}[htb] \begin{center} \includegraphics[width=0.5\textwidth]{fig_workflow_EigenKernelw.eps} \end{center} \caption{Schematic diagram of the possible hybrid workflows for a future version of EigenKernel with the a posteriori verification routine. Two routines in ScaLAPACK and one routine in ELPA are available for the reducer, whereas one routine in ScaLAPACK, two routines in ELPA, and two routines in EigenExa are available for the SEP solver.
The a posteriori verification routine is commonly used among the workflows. } \label{FIG-WORKFLOW-EK} \end{figure} \subsection{Verified numerical computations} \label{ssec:VNC} We briefly explain how to obtain mathematically rigorous numerical results using floating-point arithmetic. Let $\mathbb{F}$ and $\mathbb{IF}$ be the sets of floating-point numbers and intervals, respectively. We use bold-faced letters for interval matrices, the elements of which are intervals. For an interval matrix $\mathbf{C}$, $C_{\mathrm{inf}}$ and $C_{\mathrm{sup}}$ denote the left and right endpoints, respectively, such that ${\bf C}=[C_{\mathrm{inf}},C_{\mathrm{sup}}]$, i.e., $\mathbf{C}_{ij} = [(C_{\mathrm{inf}})_{ij},(C_{\mathrm{sup}})_{ij}]$ for all $(i,j)$ pairs, which is known as the ``inf-sup'' form. In addition, $C_{\mathrm{mid}}$ and $C_{\mathrm{rad}}$ denote the midpoint and the radius of ${\bf C}$, respectively, such that $\mathbf{C} = [C_{\mathrm{mid}} - C_{\mathrm{rad}},C_{\mathrm{mid}} + C_{\mathrm{rad}}]$, which is known as the ``mid-rad'' form. Let $\mathit{fl}(\cdot)$, $\mathit{fl}_\bigtriangledown(\cdot)$, and $\mathit{fl}_\bigtriangleup(\cdot)$ denote results computed by floating-point arithmetic, as defined in IEEE 754, with rounding to the nearest (roundTiesToEven), rounding downwards (roundTowardNegative), and rounding upwards (roundTowardPositive), respectively. For a given matrix $C = (c_{ij}) \in \R^{n \times n}$, the notation $\abs{C}$ indicates $\abs{C} = (\abs{c_{ij}}) \in \R^{n \times n}$, and the same applies to vectors, i.e., the absolute value is taken componentwise. Next, we review basic interval matrix multiplication (cf.~\cite{Ru1999}). For two point matrices $P, Q \in \mathbb{F}^{n \times n}$, the matrix product $PQ \in \mathbb{R}^{n \times n}$ can be enclosed as \begin{equation} PQ \in [\mathit{fl}_\bigtriangledown (PQ), \ \mathit{fl}_\bigtriangleup (PQ)], \label{eq:pp} \end{equation} which requires two matrix multiplications. For a point matrix $P \in \mathbb{F}^{n \times n}$ and an interval matrix ${\bf Q} \in \mathbb{IF}^{n \times n}$, the product $P{\bf Q}$ can be enclosed efficiently using the mid-rad form of ${\bf Q}$ as \begin{equation} P{\bf Q} \subset [\mathit{fl}_\bigtriangledown(PQ_{\mathrm{mid}} - T), \ \mathit{fl}_\bigtriangleup (PQ_{\mathrm{mid}} + T)], \quad T = \mathit{fl}_\bigtriangleup (|P|Q_{\mathrm{rad}}), \label{eq:pi} \end{equation} which involves three matrix multiplications. Although the inf-sup form can also be used to calculate an enclosure of $P{\bf Q}$, the computation cannot be written simply in terms of products of point matrices, so it is much more difficult to achieve high performance with the inf-sup form in practice, as compared to the mid-rad form~\cite{Ru1999}. If $\bf Q$ is given in the inf-sup form $[Q_{\mathrm{inf}},Q_{\mathrm{sup}}]$, we can easily transform $\bf Q$ into the mid-rad form, for example, by \[ Q_{\mathrm{mid}} = \mathit{fl}_\bigtriangleup((Q_{\mathrm{inf}} + Q_{\mathrm{sup}})/2), \quad Q_{\mathrm{rad}} = \mathit{fl}_\bigtriangleup(Q_{\mathrm{mid}} - Q_{\mathrm{inf}}), \] which satisfies $[Q_{\mathrm{inf}},Q_{\mathrm{sup}}] \subset [Q_{\mathrm{mid}}-Q_{\mathrm{rad}},Q_{\mathrm{mid}}+Q_{\mathrm{rad}}]$. There exist several implementations of the above interval matrix multiplication, e.g., C-XSC~\cite{C-XSC}, a C++ library, and INTLAB~\cite{INTLAB}, a Matlab/Octave toolbox for verified numerical computations. Both C-XSC and INTLAB share the common feature that they use Basic Linear Algebra Subprograms (BLAS) routines.
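The following serial Python sketch mimics these enclosures. Since directed rounding is not directly exposed in plain \texttt{numpy}, it substitutes the standard a priori error bound $|\mathit{fl}(PQ) - PQ| \le \gamma_n |P||Q|$ with $\gamma_n = nu/(1-nu)$ and unit roundoff $u$, which yields a slightly wider mid-rad-style enclosure than (\ref{eq:pp}) and (\ref{eq:pi}); a fully rigorous code would also bound the rounding incurred while computing the radius itself.
\begin{verbatim}
import numpy as np

U = np.finfo(float).eps / 2.0   # unit roundoff u = 2**-53

def enclose_pp(P, Q):
    # Enclosure of the point product P Q, in the spirit of Eq. (pp),
    # returned in mid-rad form.
    n = P.shape[1]
    gamma = n * U / (1.0 - n * U)
    mid = P @ Q
    rad = gamma * (np.abs(P) @ np.abs(Q))
    return mid, rad

def enclose_pi(P, Qmid, Qrad):
    # Enclosure of P * [Qmid - Qrad, Qmid + Qrad]; the extra term
    # plays the role of T = |P| Q_rad in Eq. (pi).
    mid, rad = enclose_pp(P, Qmid)
    return mid, rad + np.abs(P) @ Qrad
\end{verbatim}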
Consequently, we can efficiently implement interval matrix multiplication using PBLAS, the parallel version of BLAS, in distributed computing environments, as long as directed rounding is available in the BLAS routines for matrix multiplication and the reduction operation of summation. \section{A posteriori verification methods \label{SEC-VERIF-METHODS}} \subsection{Possible verification methods \label{SEC-VERIF-METHODS-GENERAL}} Possible verification methods are discussed here. In order to measure the accuracy of the computed solution $(\ap{\lambda}_{k},\ap{x}_k)$, application researchers often compute a norm of the residual vector, such as \[ \frac{\normtwo{A \ap{x}_k - \ap{\lambda}_k B \ap{x}_k}}{\normtwo{\ap{x}_k}} . \] Although this quantity usually suffices to check whether the solver works correctly, it does not verify the accuracy of the computed eigenvalue. The following inequality is a known residual bound \cite{MiOgRuOi2010}: \begin{eqnarray} \min_{1 \le j \le n}\abs{\lambda_{j} - \ap{\lambda}_{k}} \le \sqrt{\normtwo{B^{-1}}}\frac{\normtwo{A \ap{x}_k - \ap{\lambda}_k B \ap{x}_k}}{\sqrt{\trans{\ap{x}_k}B\ap{x}_k}}, \label{EQ-GEP-BOUND} \end{eqnarray} which is straightforwardly derived from Wilkinson's bound~\cite{Wi1961} for the standard eigenvalue problem. From the bound \eqref{EQ-GEP-BOUND}, we can confirm that some eigenvalue of $(A,B)$ exists in the neighborhood of $\ap{\lambda}_{k}$ satisfying (\ref{EQ-GEP-BOUND}). However, we cannot determine whether $\ap{\lambda}_{k}$ is an approximation of the $k$-th eigenvalue of $(A,B)$. In order to understand the electronic state of the problems correctly, it is crucial to determine the order of the eigenvalues \cite{LeHoSoMiZh2018}. To our knowledge, there are the following two approaches to determining the order of eigenvalues of symmetric matrices: \begin{itemize} \item[(a)] Compute all eigenpairs (pairs of eigenvalues and eigenvectors) and verify the error bounds of all computed eigenvalues (cf., e.g., \cite{MiOgRuOi2010,Mi2012}). \item[(b)] Compute an approximation $\ap{\lambda}_{k}$ of a target eigenvalue using Sylvester's law of inertia with $\mathrm{LD\trans{L}}$ decomposition \cite{LeHoSoMiZh2018}, and verify that $\ap{\lambda}_{k}$ is an approximation of the $k$-th eigenvalue with an error bound (cf., e.g., \cite{Ya2001}). \end{itemize} The advantages and disadvantages of each approach from a practical point of view are as follows: \begin{itemize} \item Approach (a) is simpler, numerically more stable, and easier to implement than Approach (b). \item Approach (a) can straightforwardly use highly optimized routines for matrix multiplication and eigenvalue decomposition. \item Approach (a) cannot exploit the sparsity of $A$ and $B$, whereas Approach (b) can to a certain extent. \end{itemize} In the present paper, we adopt Approach (a) for the simplicity and efficiency of code development on supercomputers. \subsection{Proposed method} We attempt to obtain componentwise error bounds for the computed eigenvalues $\ap{\lambda}_{k}$, $k = 1, 2, \dots, n$. Let $X, D \in \R^{n \times n}$ denote a matrix comprising all generalized eigenvectors of $(A,B)$ and a diagonal matrix of the corresponding generalized eigenvalues such that \[ X = [x_{1},x_{2},\dots,x_{n}], \quad D = \mathrm{diag}(\lambda_{1},\lambda_{2},\dots,\lambda_{n}) . \] Let $I$ denote the $n \times n$ identity matrix. Then, we have \[ \left\{\begin{array}{l} AX = BXD, \\ \trans{X}BX = I . \end{array}\right.
\] Let $\ap{X} = [\ap{x}_{1},\ap{x}_{2},\dots,\ap{x}_{n}] \in \R^{n \times n}$ and $\ap{D} = \mathrm{diag}(\ap{\lambda}_{1},\ap{\lambda}_{2},\dots,\ap{\lambda}_{n}) \in \R^{n \times n}$ be approximations of $X$ and $D$, respectively. Suppose $\ap{X}$ is nonsingular. Then, \[ A\ap{X} \approx B\ap{X}\ap{D}, \quad \trans{\ap{X}}B\ap{X} \approx I \quad \Rightarrow \quad \ap{X}^{-1}B^{-1}A\ap{X} \approx \ap{D}, \quad (B\ap{X})^{-1} \approx \trans{\ap{X}} . \] Since $\ap{X}^{-1}B^{-1}A\ap{X}$ is a similarity transformation of $B^{-1}A$, the eigenvalues of $\ap{X}^{-1}B^{-1}A\ap{X}$ are the same as those of $B^{-1}A$, and thus the generalized eigenvalues of $(A,B)$. Here, we attempt to compute an inclusion of $\ap{X}^{-1}B^{-1}A\ap{X}$. To this end, we introduce Yamamoto's theorem for verified solutions of linear systems. For given matrices $P = (p_{ij}), Q = (q_{ij}) \in \R^{n \times n}$, the notation $P \le Q$ indicates $p_{ij} \le q_{ij}$ for all $(i,j)$, and the same applies to vectors, i.e., the inequality holds componentwise. Moreover, define $e \equiv \trans{(1,1,\dots,1)} \in \R^{n}$. \begin{theorem}[Yamamoto \cite{Ya1984}] \label{th:yamamoto} Let $A$ and $C$ be real $n \times n$ matrices, and let $b$ and $\ap{x}$ be real $n$-vectors. If $\norminf{I - CA} < 1$, then $A$ is nonsingular, and \[ \abs{A^{-1}b - \ap{x}} \le \abs{C(b - A\ap{x})} + \frac{\norminf{C(b - A\ap{x})}}{1 - \norminf{I - CA}}\abs{I - CA}e . \] \end{theorem} \noindent In practice, we adopt an approximate inverse of $A$ as $C$ in Theorem~\ref{th:yamamoto}. In order to apply Yamamoto's theorem to componentwise error bounds for computed eigenvalues with the Gershgorin circle theorem, we present a variant of Yamamoto's theorem. \begin{theorem} \label{th:proposed} Let $A$, $B$, $C$, and $\ap{X}$ be real $n \times n$ matrices. If $\norminf{I - CA} < 1$, then $A$ is nonsingular, and \[ \abs{A^{-1}B - \ap{X}}e \le \abs{C(B - A\ap{X})}e + \frac{\norminf{C(B - A\ap{X})}}{1 - \norminf{I - CA}}\abs{I - CA}e . \] \end{theorem} \begin{proof} In a similar manner to the derivation of Yamamoto's theorem, and noting that for $P \in \R^{n \times n}$, $\abs{P}e \le \norminf{P}e$, we have \begin{eqnarray*} \abs{A^{-1}B - \ap{X}}e &=& \abs{(CA)^{-1}C(B - A\ap{X})}e \\ &\le& \abs{(CA)^{-1}}\cdot\abs{CR}e, \quad R \equiv B - A\ap{X} \\ &=& \abs{(I - (I - CA))^{-1}}\cdot\abs{CR}e = \abs{I + G + G^{2} + \cdots}\cdot\abs{CR}e, \quad G \equiv I - CA \\ &\le& \abs{CR}e + \abs{G}(I + \abs{G} + \abs{G}^{2} + \cdots)\abs{CR}e \\ &\le& \abs{CR}e + \norminf{CR}\abs{G}(I + \abs{G} + \abs{G}^{2} + \cdots)e \\ &\le& \abs{CR}e + \frac{\norminf{CR}}{1 - \norminf{G}}\abs{G}e, \end{eqnarray*} which proves the theorem. \end{proof} We now consider a linear system $(B\ap{X})Y = A\ap{X}$ for $Y$. Then, we can regard $\ap{D}$ as its approximate solution and $\trans{\ap{X}}$ as an approximate inverse of $B\ap{X}$. Let $Y$, $R$, and $G$ be defined as \begin{equation} \label{eq:RGdef} \left\{\begin{array}{l} R \equiv \trans{\ap{X}}(A\ap{X} - B\ap{X}\ap{D}), \\ G \equiv \trans{\ap{X}}B\ap{X} - I . \end{array}\right. \end{equation} If $\norminf{G} < 1$, applying Theorem~\ref{th:proposed} to the linear system $(B\ap{X})Y = A\ap{X}$ yields \begin{equation} \label{eq:estimate} \abs{\ap{X}^{-1}(B^{-1}A)\ap{X} - \ap{D}}e = \abs{(B\ap{X})^{-1}(A\ap{X}) - \ap{D}}e \le \abs{R}e + \frac{\norminf{R}}{1 - \norminf{G}}\abs{G}e \equiv r . \end{equation} Recall that $\lambda_{i}$, $i = 1, 2, \dots, n$, are the eigenvalues of $B^{-1}A$. 
For $\Lambda \equiv \{\lambda_{1},\lambda_{2},\dots,\lambda_{n}\}$, the Gershgorin circle theorem implies \begin{equation} \label{eq:veig} \Lambda \subseteq \bigcup_{i = 1}^{n}[\ap{\lambda}_{i} - r_{i}, \ap{\lambda}_{i} + r_{i}] . \end{equation} If all the disks $[\ap{\lambda}_{i} - r_{i}, \ap{\lambda}_{i} + r_{i}]$ are isolated, then all of the eigenvalues are separated, i.e., each disk contains precisely one eigenvalue of $B^{-1}A$~\cite[pp.~71ff]{Wi1965}, as shown schematically in Fig.~\ref{FIG-VERIFIED-SOLUTION}. If several disks overlap, i.e., $|\ap{\lambda}_{k+1} - \ap{\lambda}_{k}| \le r_k + r_{k+1}$ for some $k$, then some of the eigenvalues are degenerate or nearly degenerate. Moreover, if $B$ is ill-conditioned, then the $B$-orthogonality of $\ap{X}$ may break down such that $\norminf{G} \ge 1$. In such a case, Theorem~\ref{th:proposed} cannot be applied, and the verification procedure must end in failure. Therefore, an implementation of the verification method needs to check whether $\norminf{G} < 1$. In \cite{Mi2012}, a similar method was proposed, which is essentially the same as the proposed method. The main difference is that the method in \cite{Mi2012} focuses on the non-symmetric case and is more general, whereas the proposed method is specialized to the symmetric case, i.e., we can avoid complex arithmetic throughout the verification procedure and compute an approximate inverse of $B\ap{X}$ by utilizing $\trans{\ap{X}} \approx (B\ap{X})^{-1}$. \begin{figure}[t] \begin{center} \includegraphics[width=0.35\textwidth]{fig-verified-solution.eps} \end{center} \caption{Schematic diagram of a verified solution in which all of the disks are separated and each disk contains precisely one eigenvalue ($\lambda_k \in [\ap{\lambda}_k- r_k, \ap{\lambda}_k + r_k ]$). } \label{FIG-VERIFIED-SOLUTION} \end{figure} \subsection{Code development \label{SEC-VERIF-CODE}} We explain how to obtain an upper bound of the vector $r$ in \eqref{eq:estimate} using only floating-point arithmetic. We first obtain upper bounds $R'$ and $G'$ of $| R | = | \hat{X}^{\top}(A\hat{X}- B\hat{X} \hat{D})|$ and $| G | = | \hat{X}^{\top} B \hat{X}-I |$ in (\ref{eq:RGdef}), such that $\abs{R} \le R'$ and $\abs{G} \le G'$, as follows: \begin{enumerate} \item ${\bf C} \leftarrow B \hat{X}$ \% Two matrix multiplications based on (\ref{eq:pp}) \item ${\bf F} \leftarrow \hat{X}^{\top}{\bf C}$ \% Three matrix multiplications based on (\ref{eq:pi}) \item ${\bf W} \leftarrow {\bf F} - I$ \% Negligible cost, $W_{\mathrm{inf}}\equiv\mathit{fl}_\bigtriangledown (F_{\mathrm{inf}}-I), \ W_{\mathrm{sup}}\equiv\mathit{fl}_\bigtriangleup (F_{\mathrm{sup}}-I)$ \item $| G | \le \max(|W_{\mathrm{inf}}|,|W_{\mathrm{sup}}|) \equiv G'$ \item ${\bf F} \leftarrow A \hat{X}$ \% Two matrix multiplications based on (\ref{eq:pp}) \item ${\bf C} \leftarrow {\bf C} \hat{D} $ \% Negligible cost because $\hat D$ is a diagonal matrix \item ${\bf C} \leftarrow {\bf F} - {\bf C} $ \% Negligible cost, \ $C_{\mathrm{inf}}$ is overwritten by $\mathit{fl}_\bigtriangledown (F_{\mathrm{inf}}-C_{\mathrm{sup}}), \ C_{\mathrm{sup}}$ is overwritten by $\mathit{fl}_\bigtriangleup (F_{\mathrm{sup}}-C_{\mathrm{inf}})$ \item ${\bf C} \leftarrow \hat{X}^{\top} {\bf C} $ \% Three matrix multiplications based on (\ref{eq:pi}) \item $| R | \le \max(|C_{\mathrm{inf}}|,|C_{\mathrm{sup}}|) \equiv R'$ \end{enumerate} Note that the notation `$\leftarrow$' indicates enclosure of the result.
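A compact serial rendering of Steps 1--9 and of the radius formula \eqref{eq:radius} is sketched below in plain round-to-nearest \texttt{numpy}; a rigorous implementation would replace each product by its directed-rounding enclosure, as in (\ref{eq:pp}) and (\ref{eq:pi}), so the bounds produced here are indicative only.
\begin{verbatim}
import numpy as np

def verify_bounds(A, B, X_hat, lam_hat):
    # Steps 1-9: upper bounds G' and R' of |G| and |R| in Eq. (RGdef),
    # followed by the radius vector r' of Eq. (radius).
    n = A.shape[0]
    C = B @ X_hat                           # Step 1
    Gp = np.abs(X_hat.T @ C - np.eye(n))    # Steps 2-4
    alpha2 = np.linalg.norm(Gp, np.inf)
    if alpha2 >= 1.0:
        return None                         # verification fails
    F = A @ X_hat                           # Step 5
    C = C * lam_hat                         # Step 6: C <- C D_hat (diagonal)
    Rp = np.abs(X_hat.T @ (F - C))          # Steps 7-9
    alpha1 = np.linalg.norm(Rp, np.inf)
    e = np.ones(n)
    return Rp @ e + alpha1 / (1.0 - alpha2) * (Gp @ e)
\end{verbatim}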
In the above, for given matrices $P =(p_{ij}), Q = (q_{ij}) \in \mathbb{F}^{n \times n}$, the notation $\max(P,Q)$ indicates $\max(p_{ij},q_{ij})$ for all $(i,j)$ pairs, i.e., the maximum is taken componentwise. Five matrix multiplications are required to calculate $G'$ through Step 4, and an additional five matrix multiplications are required for the remaining steps. Thus, in total, 10 matrix multiplications are required to calculate $G'$ and $R'$. Therefore, calculating $G'$ and $R'$ involves $20n^3 + \mathcal{O}(n^2)$ floating-point operations if the symmetry of $G$ is not taken into account. We compute upper bounds of $\| R \|_\infty$ and $\| G \|_\infty$ as \[ \| R \|_\infty \le \| R' \|_\infty \le \mathit{fl}_\bigtriangleup (\| R' \|_\infty) \equiv \alpha_1, \quad \| G \|_\infty \le \| G' \|_\infty \le \mathit{fl}_\bigtriangleup (\| G' \|_\infty) \equiv \alpha_2. \] If $\alpha_2 \ge 1$, then the verification fails. Hence, we check whether $\alpha_2 < 1$ after Step 4; if $\alpha_2 \ge 1$, the computation finishes prematurely without proceeding to Step 5. Otherwise, we proceed through Step 9 and obtain an upper bound $r'$ of $r$ in \eqref{eq:estimate} by \begin{equation} r \le \mathit{fl}_\bigtriangleup \left( R'e + \frac{\alpha_1}{\mathit{fl}_\bigtriangledown (1-\alpha_2)}G'e \right) \equiv r'. \label{eq:radius} \end{equation} The routine \textsf{pdsygvx} in ScaLAPACK produces computed eigenvalues $\ap{\lambda}_{i}$ with $\ap{\lambda}_1 \le \ap{\lambda}_2 \le \dots \le \ap{\lambda}_n$. Therefore, if $\ap{\lambda}_{i+1} - \ap{\lambda}_{i} > r'_{i} + r'_{i+1}$ is satisfied for all $i = 1, 2, \dots, n-1$, then we can separate all of the eigenvalues and determine the order of the eigenvalues correctly. The test code was developed in the C language with the parallel libraries PBLAS and ScaLAPACK. The solver procedure uses a GEP solver routine (\textsf{pdsygvx}) in ScaLAPACK, whereas the verifier routine uses the matrix multiplication routine (\textsf{pdgemm}) in PBLAS. Note that the verifier procedure is based primarily on matrix multiplication, whereas the solver procedure consists of complicated procedures, such as Cholesky decomposition and tridiagonalization. Therefore, the verifier procedure is expected to be moderate in terms of computational time and efficient in terms of parallelism, as compared to the solver procedure. \section{Numerical example \label{SEC-NUM-EXAMPLE} } \subsection{Problem \label{SEC-NUM-EXAMPLE-PROBLEM} } Numerical examples are presented in this section. All matrix eigenvalue problems stem from the electronic-state calculation software ELSES \cite{ELSES-URL, HOSHI2012-ELSES, HOSHI2016-SC16}, and the matrix data files appear in the ELSES matrix library \cite{ELSES-MATRIX-LIBRARY-URL, HOSHI2018-PENTA}. Details are explained in \ref{SEC-GHEV}. The problems calculated in this section are PPE354, PPE3594, PPE7194, PPE17994, PPE107994, VCNT22500, VCNT225000, and NCCS430080 in the ELSES matrix library. The matrices are those of systems having disordered atomic structures. Disordered systems are important for industrial applications because most industrial materials are disordered, unlike ideal crystals or periodic structures. Consequently, the eigenvalues are non-degenerate in all of the problems. The number in the problem name indicates the matrix dimension $n$. For example, the system PPE354 contains $n \times n$ matrices $A$ and $B$ with $n = 354$. All of the matrices $A$ and $B$ in these systems are real symmetric.
The systems whose names contain the letters `PPE' are organic polymers of poly-(phenylene-ethynylene) (PPE). The left-hand panel of Fig.~\ref{FIG-EigenValue-Graph}(a) shows the structural formula of PPE, and the right-hand panel shows a part of the polymer in a disordered structure. The difference in matrix size stems from the length of the polymer chain. The system PPE354, for example, is a polymer with $N_{m}=10$ monomers and $N_{\rm atom}= 12N_{m} = 120$ atoms. The system VCNT225000 is a vibrating carbon nanotube (VCNT) system. The system NCCS430080 is a nano-composite carbon solid (NCCS) system \cite{HOSHI2013-JPSJ} and will be explained in the last paragraph of this section. The characteristics of the eigenvalue distribution can be captured by the following two quantities. One is the difference between consecutive approximate eigenvalues, $\ap{\delta}_k \equiv \ap{\lambda}_{k+1} - \ap{\lambda}_k$, $k=1,2,\dots,n-1$, and the other is the eigenvalue count $I(\lambda)$, which is defined on the eigenvalue axis $\lambda$ as \begin{eqnarray} I(\lambda) \equiv \sum_{k = 1}^{n} \theta(\lambda - \lambda_k ) \end{eqnarray} with the step function \begin{eqnarray} \theta(\lambda) \equiv \begin{cases} 1 & ( \lambda \ge 0) \\ 0 & ( \lambda < 0). \end{cases} \end{eqnarray} In other words, the eigenvalue count $I(\lambda)$ is the number of eigenvalues that do not exceed $\lambda$. Here, we demonstrate the similarity and the size dependence of the eigenvalue distribution among the organic polymer systems. The organic polymers PPE354, PPE17994, and PPE107994 are selected. Figures \ref{FIG-EigenValue-Graph}(b) and \ref{FIG-EigenValue-Graph}(c) show the normalized eigenvalue distribution $I(\lambda)/n$ for these three systems. The three polymers exhibit quite similar curves in Figs.~\ref{FIG-EigenValue-Graph}(b) and \ref{FIG-EigenValue-Graph}(c), and, therefore, the difference $\ap{\delta}_k$ is nearly proportional to $1/n$ $(\ap{\delta}_k \propto 1/n)$, as explained in Section~\ref{SEC-CLUSTERED}. \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\textwidth]{fig-Eigenvalue-Graph.eps} \end{center} \caption{(a) Structural formula (left) and a part of the atomic structure (right) of poly-(phenylene-ethynylene) (PPE). (b) Similarity of the eigenvalue distributions of PPE354 (circle), PPE17994 (square), and PPE107994 (diamond). The normalized eigenvalue counts $I(\lambda)/n$ are plotted on the eigenvalue axis $\lambda$. (c) Close-up of the local area indicated by the dotted line in (b). } \label{FIG-EigenValue-Graph} \end{figure} \subsection{Numerical results \label{SEC-NUM-EXAMPLE-RESULT} } Tables \ref{TABLE-NUM-EXAMPLE-RESULT} and \ref{TABLE-NUM-EXAMPLE-TIME} show the calculation results on the K computer. First, we focus on the numerical results for the approximate eigenvalues $\ap{\lambda}_k$ and their upper bounds $r'$. The routine \textsf{pdsygvx} in ScaLAPACK produces $\ap{\lambda}_i$, $i = 1, 2, \dots, n$, with $\ap{\lambda}_1 \le \ap{\lambda}_2 \le \dots \le \ap{\lambda}_n$. The vector $r'$ is obtained by (\ref{eq:radius}). Here, we define the radius sum $\rho_k \equiv r'_{k+1} + r'_{k}$ for $k = 1, 2, \dots, n-1$. We find $m$ such that $\displaystyle \ap{\delta}_m - \rho_m = \min_{1 \le k \le n - 1}(\ap{\delta}_k - \rho_k)$. The items ``Difference'' and ``Radius sum'' in Table~\ref{TABLE-NUM-EXAMPLE-RESULT} show $\ap{\delta}_m$ and $\rho_m$, respectively.
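Given the sorted approximate eigenvalues and the verified radii, the quantities $\ap{\delta}_k$, $\rho_k$, and the index $m$ can be extracted in a few lines. The sketch below uses tiny synthetic arrays in place of the solver and verifier outputs.
\begin{verbatim}
import numpy as np

# lam_hat: approximate eigenvalues in ascending order;
# r_prime: verified radii r'_k from Eq. (radius). Synthetic values here.
lam_hat = np.array([-0.50, -0.488, -0.4879, -0.45])
r_prime = np.array([1e-12, 4e-12, 5e-12, 2e-12])

delta = np.diff(lam_hat)            # delta_k = lam_{k+1} - lam_k
rho = r_prime[:-1] + r_prime[1:]    # rho_k = r'_k + r'_{k+1}
m = np.argmin(delta - rho)          # index attaining min(delta_k - rho_k)
print(m + 1, delta[m], rho[m], bool(np.all(delta > rho)))
\end{verbatim}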
As shown in the table, $\ap{\delta}_m > \rho_m$ is satisfied in all of the problems, i.e., all of the disks $|\lambda_k - \ap{\lambda}_k| < r'_k$ are separated, as in Fig.~\ref{FIG-VERIFIED-SOLUTION}. Thus, we can determine the order of the eigenvalues in each problem. If $\ap{\delta}_k < \rho_k$ were satisfied for some $k$, then the two disks $|\lambda_k - \ap{\lambda}_k| < r'_k$ and $|\lambda_{k+1} - \ap{\lambda}_{k+1}| < r'_{k+1}$ would overlap, and the two exact eigenvalues $\lambda_k$ and $\lambda_{k+1}$ might be degenerate. Figure~\ref{FIG-EigenValue-Graph2}(a) shows the eigenvalue difference $\{ \ap{\delta}_k \}$ and the radius sum $\{ \rho_k \}$ as a function of the eigenvalue $\{ \ap{\lambda}_k \}$ in the case of PPE107994. The radius sum satisfies $\rho_k \le 10^{-10}$ and is smaller than the difference ($\rho_k < \ap{\delta}_k$). The minimum is attained at $m=49,201$, with $\ap{\lambda}_{49201} \approx -0.488$, $\ap{\delta}_{49201} \approx 6.42 \times 10^{-11}$, and $\rho_{49201} \approx 9.17 \times 10^{-12}$. Figure~\ref{FIG-EigenValue-Graph2}(b) shows a close-up of Fig.~\ref{FIG-EigenValue-Graph2}(a) and contains the eigenvalue $\ap{\lambda}_{49201} \approx -0.488$. It is reasonable that the eigenvalue $\ap{\lambda}_{49201}$ appears in the region $-0.490 < \lambda < -0.485$, because many eigenvalues are densely clustered there, and the eigenvalue count $I(\lambda)$ increases rapidly in this region, as shown in Fig.~\ref{FIG-EigenValue-Graph}(c). The same analysis was also carried out for NCCS430080, the largest problem among the present calculations, and the results are shown in Figs.~\ref{FIG-EigenValue-Graph2}(c) and \ref{FIG-EigenValue-Graph2}(d). The radius sum is again smaller than the difference ($\rho_k < \ap{\delta}_k$). \begin{table}[htb] \begin{center} \caption{Numerical examples. \label{TABLE-NUM-EXAMPLE-RESULT}} \begin{tabular}{|l|r|l|l|} \hline Problem name & Matrix dimension ($n$) & Difference ($\ap{\delta}_m$) & Radius sum ($\rho_m$) \\ \hline \hline PPE354 & 354 & $6.61 \times 10^{-5}$ & $4.90 \times 10^{-13}$ \\ PPE3594 & 3,594 & $1.03 \times 10^{-7}$ & $1.33 \times 10^{-12}$ \\ PPE7194 & 7,194 & $5.55 \times 10^{-8}$ & $1.18 \times 10^{-12}$ \\ PPE17994 & 17,994 &$5.32 \times 10^{-11}$ & $2.56 \times 10^{-12}$ \\ PPE107994 & 107,994 & $6.42 \times 10^{-11}$ & $9.17 \times 10^{-12}$ \\ VCNT22500 & 22,500 & $2.59 \times 10^{-7}$ & $3.20 \times 10^{-10}$ \\ VCNT225000 & 225,000 & $1.97 \times 10^{-9}$ & $1.64 \times 10^{-9}$ \\ NCCS430080 & 430,080 & $5.10 \times 10^{-9}$ & $1.61 \times 10^{-9}$ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[tb] \begin{center} \includegraphics[width=0.7\textwidth]{fig-Eigenvalue-Graph2.eps} \end{center} \caption{(a) Plot of the eigenvalue difference $\{ \ap{\delta}_k \}$ and the radius sum $\{ \rho_k \}$ as a function of the eigenvalue $\{ \ap{\lambda}_k \}$ in the case of PPE107994. The arrow indicates $\ap{\delta}_m$. (b) Close-up of (a). The arrow indicates $\ap{\delta}_m$. (c) Plot of the eigenvalue difference $\{ \ap{\delta}_k \}$ and the radius sum $\{ \rho_k \}$ as a function of the eigenvalue $\{ \ap{\lambda}_k \}$ in the case of NCCS430080. The arrow indicates $\ap{\delta}_m$. (d) Close-up of (c). The arrow indicates $\ap{\delta}_m$. } \label{FIG-EigenValue-Graph2} \end{figure} Table~\ref{TABLE-NUM-EXAMPLE-TIME} shows the computational times. The item $T_{\rm sol}$ in Table~\ref{TABLE-NUM-EXAMPLE-TIME} shows the computing time for \textsf{pdsygvx} in ScaLAPACK.
The item $T_{\rm veri}$ shows the computing time for the verification process, mainly the time for matrix multiplications. Here, the verifier incurs only a moderate cost ($T_{\rm veri} \le T_{\rm sol}$), as expected in Section~\ref{SEC-VERIF-CODE}. More intensive benchmarks, including weak scaling, will be carried out in the future. \begin{table}[bht] \begin{center} \caption{Elapsed times for the problems. The number of processor nodes used $P$ and the elapsed times in seconds for the solver $T_{\rm sol}$ and the verifier $T_{\rm veri}$ are shown. \label{TABLE-NUM-EXAMPLE-TIME}} \begin{tabular}{|l|r|r|r|} \hline Problem name & $P$ & $T_{\rm sol}$ & $T_{\rm veri}$ \\ \hline \hline PPE354 & 4 & 0.32 & 0.12 \\ PPE3594 & 4 & 20.74 & 4.73 \\ PPE7194 & 4 & 118.84 & 31.74 \\ PPE17994 & 16 & 217.91 & 105.75 \\ PPE107994 & 600 & 1009.85 & 682.92 \\ VCNT22500 & 64 & 105.75 & 59.06 \\ VCNT225000 & 2025 & 2625.76 & 1775.09 \\ NCCS430080 & 6400 & 8960.03 & 3496.56 \\ \hline \end{tabular} \end{center} \end{table} In conclusion, the verification procedure delivers intervals that contain the exact eigenvalues ($|\lambda_k - \ap{\lambda}_k| < r'_k$), given by the approximate eigenvalues $\ap{\lambda}_k$ and the radii $r'_k$. We plan to upload the radius data files to the ELSES matrix library, as well as the input matrix data and the approximate eigenvalue data. Then, a graph similar to Fig.~\ref{FIG-EigenValue-Graph2} can be drawn in order to measure the accuracy of the computed solutions. Finally, the present numerical results are discussed in the context of computational physics. The matrix problem NCCS430080, the largest matrix problem in the present paper, appears in a previous paper on a nano-composite carbon solid \cite{HOSHI2013-JPSJ}. In general, carbon can form diamond and graphite crystals. The material is composed of graphite-like and diamond-like domains. Figure~\ref{FIG-NPD-WFN} shows an example of an electronic wavefunction (the highest occupied electronic wavefunction, i.e., the wavefunction of the electron that has the highest energy). The atomic structure in Fig.~\ref{FIG-NPD-WFN} is that of Fig. 2(a) of Ref.~\cite{HOSHI2013-JPSJ}. (See Ref.~\cite{HOSHI2013-JPSJ} for details.) In the present context, Fig.~\ref{FIG-NPD-WFN} indicates that the wavefunction is an intermediate wavefunction, as explained in Section~\ref{SEC-CLUSTERED}, and lies in the boundary region between graphite-like and diamond-like domains. The a posteriori verification procedure confirms that all of the eigenvalues are distinguished numerically, and thus the above physical discussion regarding each wavefunction is meaningful. \begin{figure}[ht] \begin{center} \includegraphics[width=0.5\textwidth]{fig-NPD-wfn.eps} \end{center} \caption{Example of an electronic wavefunction of a nano-composite carbon solid \cite{HOSHI2013-JPSJ}. The wavefunction $\phi(\bm{r})$ is drawn as the two isosurfaces painted green and yellow. The isovalues are $\phi(\bm{r}) = \pm C$ ($C>0$). The wavefunction is \lq intermediate', because it exhibits intermediate properties between extended and localized wavefunctions. } \label{FIG-NPD-WFN} \end{figure} \section{Summary and overview \label{SEC-SUMMARY}} The present paper proposes an a posteriori verification method for the generalized eigenvalue problems that appear in large-scale electronic state calculations. The verification procedure gives a rigorous mathematical foundation for numerical reliability.
In particular, the present result guarantees that all of the approximate eigenvalues $\{ \hat{\lambda}_k \}_k$ are well separated and that the participation ratio values $\{ P(\hat{x}_k) \}_k$, and any other physical quantity defined for each eigenvector, are meaningful. Since the verification procedure consists of simple matrix multiplications, its computational cost is moderate, as compared with that of the solver procedure. Therefore, application researchers can use the verification function with only a moderate increase in computational cost. Test calculations were carried out on the K computer for real problems with matrix sizes of up to $n \approx 4 \times 10^5$. The next stage of research is the integration of the present verifier routine with the solver routines in EigenKernel, in which we can use various solver routines from ScaLAPACK and newer libraries and can compare their approximate solutions in the verification procedure. Future issues include (i) the verification of eigenvectors and (ii) the refinement of approximate eigenpairs. The refinement procedure will be crucial, in particular, when lower-precision arithmetic, such as half- or single-precision arithmetic, is used to calculate an approximate solution as an initial guess. For example, refinement algorithms for the symmetric eigenvalue problem, based on matrix multiplications, have recently been proposed in \cite{OgAi2018,OgAi2019}. Such refinement algorithms enable application researchers to use lower-precision arithmetic with satisfactory reliability of the computed results, which will be of great importance on next-generation architectures optimized for lower-precision arithmetic. \section*{Acknowledgement} The authors wish to thank the anonymous referees for their valuable comments, which helped to improve our paper significantly. The present study was supported in part by MEXT as Exploratory Issue 1-2 of the Post-K (Fugaku) computer project ``Development of verified numerical computations and super high-performance computing environment for extreme researches'' using computational resources of the K computer provided by the RIKEN R-CCS through the HPCI System Research project (Project ID: hp180222) and Priority Issue 7 of the Post-K computer project and by JSPS KAKENHI Grant Numbers 16KT0016, 17H02828, and 19H04125.
{ "timestamp": "2020-02-27T02:10:36", "yymm": "1904", "arxiv_id": "1904.06461", "language": "en", "url": "https://arxiv.org/abs/1904.06461" }
\section{Background} We recall a classic theorem of Erd\H{o}s and Gallai~\cite{erdHos1959maximal}. \begin{theorem}[Erd\H{o}s, Gallai~\cite{erdHos1959maximal}] Let $n$ and $k$ be positive integers, and let $G$ be an $n$-vertex graph containing no path with $k$ edges. Then \[ e(G) \le \frac{(k-1)n}{2}. \] Equality holds if and only if $k$ divides $n$ and $G$ is the graph consisting of $n/k$ disjoint complete graphs of size $k$. \end{theorem} Erd\H{o}s and S\'os~\cite{erdos1984some} conjectured that the same bound holds for any tree with $k$ edges. A proof of this conjecture for sufficiently large $k$ was announced in the 1990s by Ajtai, Koml\'os, Simonovits and Szemer\'edi. We will consider a variant of this problem in the setting of hypergraphs and multi-hypergraphs, and we obtain exact results for the case of large uniformity. Given a hypergraph $\mathcal{H}$, we denote the vertex and edge sets of $\mathcal{H}$ by $V(\mathcal{H})$ and $E(\mathcal{H})$, respectively. We denote the number of vertices and hyperedges by $v(\mathcal{H}) = \abs{V(\mathcal{H})}$ and $e(\mathcal{H}) = \abs{E(\mathcal{H})}$. A hypergraph is said to be $r$-uniform if all of its hyperedges have size $r$. We now provide some definitions which we will need. \begin{definition} For a given uniformity $r$ and a fixed graph $G$, an $r$-uniform multi-hypergraph $\mathcal{H}$ is a \emph{Berge copy} of $G$ if there exist an injection $f_1: V(G) \to V(\mathcal{H})$ and a bijection $f_2:E(G) \to E(\mathcal{H})$ such that if $e = \{v_{1},v_{2}\}\in E(G)$, then $\{f_1(v_{1}),f_1(v_{2})\} \subseteq f_2(e)$. The set of Berge copies of $G$ is denoted by $\mathcal{B} G$. The sets $f_1(V(G))$ and $f_2(E(G))$ are called the \emph{defining} vertices and hyperedges, respectively. \end{definition} We recall the classical definition of the Tur\'an number of a family of hypergraphs. \begin{definition} The Tur\'an number of a family of $r$-uniform hypergraphs $\mathcal{F}$, denoted $\ex_r(n,\mathcal{F})$, is the maximum number of hyperedges in an $n$-vertex, $r$-uniform, simple hypergraph which contains no $\mathcal{H} \in \mathcal{F}$ as a sub-hypergraph. \end{definition} The same question may be asked for multi-hypergraphs; we denote the corresponding Tur\'an number by $\ex_r^{multi}(n,\mathcal{F})$. \begin{remark} If every hypergraph in $\mathcal{F}$ has at least $r+1$ vertices, then $\ex_r^{multi}(n,\mathcal{F})$ is infinite, since a multi-hypergraph on $r$ vertices consisting of arbitrarily many copies of the same hyperedge is $\mathcal{F}$-free. \end{remark} The classical theorem of Erd\H{o}s and Gallai was extended to Berge paths in $r$-uniform hypergraphs by Gy\H{o}ri, Katona and Lemons~\cite{gyorikatonalemons}. \begin{theorem}[Gy\H{o}ri, Katona, Lemons \cite{gyorikatonalemons}] \label{gkl} Let $n,k,r$ be positive integers, and let $\mathcal{H}$ be an $r$-uniform hypergraph with no Berge path of length $k$. If $k>r+1>3$, we have \begin{displaymath} e(\mathcal{H}) \le \frac{n}{k} \binom{k}{r}. \end{displaymath} If $r \ge k>2$, we have \begin{displaymath} e(\mathcal{H}) \le \frac{n(k-1)}{r+1}. \end{displaymath} \end{theorem} The remaining case, $k = r + 1$, was settled later by Davoodi, Gy\H{o}ri, Methuku and Tompkins~\cite{davoodi}; the Tur\'an number matches the upper bound of Theorem~\ref{gkl} in the $k>r+1$ case. We now turn our attention to the case of trees in hypergraphs. The Tur\'an number of certain kinds of trees in $r$-uniform hypergraphs has long been a major topic of research.
For example, there is a notoriously difficult conjecture of Kalai \cite{kalai} which is more general than the Erd\H{o}s-S\'os conjecture. The trees which Kalai considers are generalizations of the notion of tight paths in hypergraphs. In another direction, F\"uredi \cite{fur} investigated linear trees, constructed by adding $r-2$ new vertices to every edge of a (graph) tree; in this setting, he proved asymptotic results for all uniformities at least $4$. Whereas the articles above considered classes of trees containing tight and linear paths, respectively, we will consider the setting of Berge trees. In the range $k > r$, a number of results on forbidding Berge trees were obtained by Gerbner, Methuku and Palmer in~\cite{bigk}. In particular, they proved that if the Erd\H{o}s-S\'os conjecture holds for a tree $T$ with $k$ edges and all of its sub-trees, and if $k>r+1$, then $\ex_r(n,\mathcal{B} T) \le \frac{n}{k}\binom{k}{r}$ (a construction matching this bound when $k$ divides $n$ is given by $n/k$ disjoint copies of the complete $r$-uniform hypergraph on $k$ vertices). In the present paper, we will consider the range $r>k$, where we prove some exact results. \section{Main Results} Considering multi-hypergraphs, we prove the following. \begin{theorem} \label{Multi_tree_theorem} Let $n,k,r$ be positive integers and let $T$ be a $k$-edge tree. Then for all $r\geq (k-1)(k-2)$, \begin{displaymath} \ex_r^{multi}(n,\mathcal{B} T)\leq \frac{n(k-1)}{r}. \end{displaymath} If $r > (k-1)(k-2)$ and $T$ is not a star, equality holds if and only if $r$ divides $n$ and the extremal multi-hypergraph is $\frac{n}{r}$ disjoint hyperedges, each with multiplicity $k-1$. If $T$ is a star, equality holds if and only if the multi-hypergraph is $(k-1)$-regular. \end{theorem} We conjecture that Theorem \ref{Multi_tree_theorem} holds for the following wider set of parameters. \begin{conjecture} \label{treeconj} Let $n,k,r$ be positive integers and let $T$ be a $k$-edge tree. Then for all $r \ge k+1$, \begin{displaymath} \ex_r^{multi}(n,\mathcal{B} T)\leq \frac{n(k-1)}{r}. \end{displaymath} For all trees $T$ which are not stars, equality holds if and only if $r$ divides $n$ and the extremal multi-hypergraph is $\frac{n}{r}$ disjoint hyperedges, each with multiplicity $k-1$. \end{conjecture} The special case of Conjecture \ref{treeconj} in which the forbidden tree is a path was settled by Gy\H{o}ri, Lemons, Salia and Zamora~\cite{new} (see the first corollary there). We now define a class of hypergraphs which we will need when we classify the extremal examples in our main result about simple hypergraphs, Theorem~\ref{Not_Star_Theorem}. \begin{definition} An $r$-uniform hypergraph $\mathcal{H}$ is \emph{two-sided} if $V(\mathcal{H})$ can be partitioned into a set $X$ and pairwise disjoint sets $A_i$, $i=1,2,\dots,t$ (also disjoint from $X$) of size $r-1$, such that every hyperedge is of the form $\{x\} \cup A_i$ for some $x \in X$. We say that a two-sided $r$-uniform hypergraph is $(a,b)$-regular if every vertex of $X$ has degree $a$ and every vertex of $\displaystyle\bigcup_{i=1}^{t}A_i$ has degree $b$. \end{definition} \begin{remark} A two-sided $r$-uniform hypergraph can also be viewed as a graph obtained by taking a bipartite graph $G$ with bipartition classes $X$ and $Y$, ``blowing up'' each vertex of $Y$ to a set of size $r-1$, and replacing each edge $\{x,y\}$ by the $r$-hyperedge containing $x$ together with the blown-up set for $y$.
\end{remark} \begin{theorem} \label{Not_Star_Theorem} Let $n,k,r$ be positive integers and let $T$ be a $k$-edge tree which is not a star. Then for all $r\geq k(k-2)$, \begin{displaymath} \ex_r(n,\mathcal{B} T)\leq \frac{n(k-1)}{r+1}. \end{displaymath} Equality holds if and only if $r+1$ divides $n$ and the extremal hypergraph is obtained from $\frac{n}{r+1}$ disjoint sets of size $r+1$, each containing $k-1$ hyperedges. The exception is when $k$ is odd and $T$ is the balanced double star, that is, the tree obtained from an edge by adding $\frac{k-1}{2}$ incident edges to each of its endpoints; in this case equality holds if and only if $r+1$ divides $n$ and $\mathcal{H}$ is obtained from the disjoint union of sets of size $r+1$ containing $k-1$ hyperedges each, and possibly a $(k-1,\frac{k-1}{2})$-regular two-sided $r$-uniform hypergraph (see Figure~\ref{Extremal}). \end{theorem} \begin{figure}[t] \centering \begin{tikzpicture}[scale = 0.65,every node/.style={scale=.8}] \draw (0,3); \filldraw[blue,fill opacity=0.3] (2,0) arc (0:360: 2cm and 2cm); \draw (0,0) node[above]{$r+1$ vertices}; \draw (0,0) node[below] {$k-1$ hyperedges}; \begin{scope}[xshift=5cm] \filldraw[blue,fill opacity=0.3] (2,0) arc (0:360: 2cm and 2cm); \draw (0,0) node[above]{$r+1$ vertices}; \draw (0,0) node[below] {$k-1$ hyperedges}; \end{scope} \begin{scope}[xshift=10cm] \filldraw[blue,fill opacity=0.3] (2,0) arc (0:360: 2cm and 2cm); \draw (0,0) node[above]{$r+1$ vertices}; \draw (0,0) node[below] {$k-1$ hyperedges}; \end{scope} \draw(0,-2.5); \end{tikzpicture} \qquad \begin{tikzpicture}[rotate=90,scale = 0.65,every node/.style={scale=.8}] \filldraw[blue, fill opacity=0.10] plot [smooth cycle] coordinates { (0.02,3.68) (.04,3.83) (2.85,2.25) (3.1,2) (2.70,1.75) } ; \filldraw[blue, fill opacity=0.10] plot [smooth cycle] coordinates { (0.02,2.18) (.04,2.33) (2.85,2.25) (3.1,2) (2.70,1.75) } ; \filldraw[blue, fill opacity=0.10] plot [smooth cycle] coordinates { (0.02,.68) (.04,.83) (2.85,2.25) (3.1,2) (2.70,1.75) } ; \filldraw[blue, fill opacity=0.10] plot [smooth cycle] coordinates { (0.02,-0.82) (.04,-0.6) (2.85,2.25) (3.1,2) (2.70,1.75) } ; \filldraw[blue, fill opacity=0.10] plot [smooth cycle] coordinates { (0.02,-3.68) (.04,-3.83) (2.85,-2.25) (3.1,-2) (2.70,-1.75) } ; \filldraw[blue, fill opacity=0.10] plot [smooth cycle] coordinates { (0.02,-2.18) (.04,-2.33) (2.85,-2.25) (3.1,-2) (2.70,-1.75) } ; \filldraw[blue, fill opacity=0.10] plot [smooth cycle] coordinates { (0.02,-.68) (.04,-.83) (2.85,-2.25) (3.1,-2) (2.70,-1.75) } ; \filldraw[blue, fill opacity=0.10] plot [smooth cycle] coordinates { (0.02,0.82) (.04,0.6) (2.85,-2.25) (3.1,-2) (2.70,-1.75) } ; \filldraw[blue, fill opacity=0.10] plot [smooth cycle] coordinates { (0.02,3.68) (.04,3.83) (2.85,.25) (3.1,0) (2.70,-.1) } ; \filldraw[blue, fill opacity=0.10] plot [smooth cycle] coordinates { (0.02,2.18) (.04,2.33) (2.85,.25) (3.1,0) (2.70,-.1) } ; \filldraw[blue, fill opacity=0.10] plot [smooth cycle] coordinates { (0.02,-3.68) (.04,-3.83) (2.85,-.25) (3.1,0) (2.70,.1) } ; \filldraw[blue, fill opacity=0.10] plot [smooth cycle] coordinates { (0.02,-2.18) (.04,-2.33) (2.85,-.25) (3.1,0) (2.70,.1) } ; \filldraw[red] (2.8,0) circle (5pt) (2.8,2) circle (5pt) (2.8,-2) circle (5pt); \filldraw[green] (0,3.75)circle (3pt) (0,2.25)circle (3pt) (0,.75)circle (3pt) (0,-3.75)circle (3pt) (0,-2.25)circle (3pt) (0,-.75)circle (3pt); \draw (4,0) node{$\abs{A_i} =r-1$, $d(A_i) =\frac{k-1}{2} $}; \draw (-.3,0) node[below]{$d(x)=k-1$}; \end{tikzpicture}
\caption{An extremal graph for Theorem~\ref{Not_Star_Theorem} is pictured. Any such graph can be obtained from disjoint copies of sets of $r+1$ vertices with $k-1$ hyperedges and, if $T$ is the balanced double star, possibly a $(k-1,\frac{k-1}{2})$-regular two-sided $r$-uniform hypergraph.} \label{Extremal} \end{figure} \section{Proofs of the main results} We start with some results about graphs. \begin{definition} For a graph $G$, we denote by $d(G)$ the average degree of $G$, that is, $d(G) = \frac{2e(G)}{v(G)}.$ \end{definition} \begin{lemma}\label{folklore} Any non-empty graph $G$ contains a subgraph $G'$ with minimum degree greater than $d(G)/2$. \end{lemma} The previous lemma is a well-known result in graph theory, which can be proved using the following lemma. \begin{lemma}\label{average} Let $G$ be a graph and let $V' \subseteq V(G)$. If $V'$ is incident with at most $\frac{d(G)}{2}\abs{V'}$ edges, then $d(G[V\setminus V']) \geq d(G)$. \end{lemma} \begin{proof} Note that if $m$ is the number of edges incident with $V'$, then we have that \begin{displaymath} 2e(G[V\setminus V']) = 2e(G) - 2m \geq d(G)v(G) - d(G)\abs{V'} = d(G)(\abs{V}-\abs{V'}) = d(G)v(G[V\setminus V']). \qedhere \end{displaymath} \end{proof} We are going to use the following fact about trees before proving the next bound on the degrees of the vertices in clusters. \begin{claim} \label{lowdegree} If $T$ is a $k$-edge tree which is not a star, then there exists a vertex of $T$ which is not a leaf and has degree at most $\frac{k+1}{2}$. \end{claim} \begin{proof} Let $T'$ be the tree obtained from $T$ by removing every leaf of $T$. Since $T$ is not a star, $T'$ has at least two vertices. Take any two vertices $u,v$ which are leaves in $T'$, and note that, for each of them, every neighbor in $T$ but one is a leaf. Moreover, since at most one of the $k$ edges of $T$ is incident with both $u$ and $v$, we have $d_T(u)+d_T(v)\leq k+1$, and so one of these two vertices has the desired properties. \end{proof} Now we introduce two more definitions which we will need in the proofs. \begin{definition} Let $\mathcal{H}$ be a (multi-)hypergraph. A $(k-1)$-\emph{cluster} is a set of $k-1$ hyperedges of $\mathcal{H}$ that intersect in at least $k-1$ vertices. The intersection of the $k-1$ hyperedges is called the \emph{core} of the $(k-1)$-cluster. The union of the $k-1$ hyperedges is called the \emph{span} of the $(k-1)$-cluster. \end{definition} \begin{definition} Let $\mathcal{H} = (V,E)$ be a multi-hypergraph. A multi-hypergraph $\mathcal{H}' = (V',E')$ is called a \emph{reduced sub-hypergraph} of $\mathcal{H}$ if $V' \subseteq V$ and there exists an injection $f:E' \to E$ such that $h \subseteq f(h)$ for all $h\in E'$. For an edge $h\in E'$ we call $f(h)\in E$ its \emph{correspondent} edge in $\mathcal{H}$. \end{definition} In the following claims, we bound the degrees of the vertices in a $(k-1)$-cluster for a hypergraph which does not contain a copy of a Berge tree. \begin{claim}\label{core} Let $n,k,r$ be positive integers, with $r \ge k+1$, and let $T$ be a $k$-edge tree. If $\mathcal{H}$ is an $r$-uniform multi-hypergraph containing no Berge copy of $T$ and $S$ is a $(k-1)$-cluster in $\mathcal{H}$, then the vertices in the core of $S$ have degree exactly $k-1$. In particular, the core vertices of $S$ are only incident with the hyperedges of $S$. \end{claim} \begin{proof} Let $C$ be the set of vertices in the core of $S$.
Suppose, by contradiction, that there is a vertex $v$ in $C$ with degree at least $k$, and let $T'$ be a tree obtained from $T$ by removing any two leaves $x,y$. Suppose that the neighbors of these leaves are $x'$ and $y'$, respectively (it is possible that $x'=y'$). Since $C$ has at least $k-1$ vertices and there are $k-1$ hyperedges containing all the vertices in $C$, we can greedily embed $T'$ in $C$ in such a way that $v$ takes the role of $x'$. Suppose the vertex $u$ takes the role of $y'$ in this greedy embedding. We can complete the embedding of $T$ by using the last hyperedge of $S$ and an unused vertex in it (one exists since $r\geq k+1$) to embed $y$. Then, since the degree of $v$ is at least $k$, we have a hyperedge available to embed $x$ at an unused vertex of this hyperedge. Thus we have found a Berge copy of $T$ in $\mathcal{H}$, a contradiction. \end{proof} \begin{claim}\label{Y} Let $n,k,r$ be positive integers, with $r \ge k+1$, and let $T$ be a $k$-edge tree which is not a star. If $\mathcal{H}$ is an $r$-uniform multi-hypergraph containing no Berge copy of $T$ and $S$ is a $(k-1)$-cluster of $\mathcal{H}$, then any vertex in the span of $S$ that is incident with a hyperedge not from $S$ has degree at most $\floor{\frac{k-1}{2}}$. \end{claim} \begin{proof} Since $T$ is not a star, by Claim~\ref{lowdegree} there is a vertex $x \in V(T)$ which is not a leaf and has degree $s$, $s\leq \floor{\frac{k+1}{2}}$, such that all but one of its neighbors are leaves; let $y$ be the neighbor of $x$ which is not a leaf. Suppose, by contradiction, that there is a vertex $v$ in the span of $S$ which is incident with a hyperedge that is not in $S$ and has degree at least $\floor{\frac{k+1}{2}}$. Let $C$ be the set of vertices in the core of $S$. From Claim~\ref{core} we know that $v$ cannot be in $C$. Pick $s$ hyperedges $h_1,h_2,\dots,h_s$ incident to $v$ in such a way that $h_1$ is not in $S$ and $h_2$ is in $S$. Choose a vertex $w \in h_1$ not in $C$ (in fact, every vertex in $h_1$ is outside $C$ by Claim~\ref{core}) and $u \in h_2$ in $C$. Choose further distinct vertices $v_3,v_4,\dots,v_s$ from the hyperedges $h_3,h_4,\dots,h_s$. The vertex $v$ will be assigned to the vertex $x$ in the tree, and the vertex $u$ will be assigned to the vertex $y$ ($v_3,v_4,\dots,v_s$ will be assigned to the leaves adjacent to $x$). Thus, using the hyperedges $h_1,h_2,\dots,h_s$, we can embed the vertex $x$ and all its neighbors in $T$ using at most $s-1$ hyperedges from $S$ and at most $s-1$ vertices from $C$ ($v$ and $w$ are not in $C$). There are at least $(k-1)-(s-1)=k-s$ remaining vertices in $C$, and each of these is contained in at least $k-s$ unused hyperedges of $S$. Thus, the remaining $k-s$ vertices of the tree can be mapped to distinct vertices from $C$, and the remaining edges of the tree may be assigned to distinct unused hyperedges of $S$. \end{proof} \begin{remark}\label{disjoint} Note that by Claim \ref{core} and Claim \ref{Y}, if $\mathcal{H}$ is a multi-hypergraph with uniformity $r \ge k+1$ that does not contain a Berge copy of a tree with $k$ edges which is not a star, then the $(k-1)$-clusters of $\mathcal{H}$ are edge-disjoint. \end{remark} \begin{lemma}\label{cluster} Let $k$ be a positive integer and let $T$ be a $k$-edge tree which is not a star. Let $\mathcal{H}$ be a (not necessarily uniform) multi-hypergraph containing no Berge copy of $T$, and assume that each hyperedge of $\mathcal{H}$ has size at least $k+1$.
If there exists a reduced sub-hypergraph $\mathcal{H}' = (V',E')$ of $\mathcal{H}$ such that $d_{\mathcal{H}'}(v) \geq k-1$ for each $v \in V'$ and $\abs{h} \geq k-1$ for each $h\in E'$, then $\mathcal{H}'$ contains a $(k-1)$-cluster. Note that if $S$ is a $(k-1)$-cluster in $\mathcal{H}'$, then the correspondent edges of $S$ in $\mathcal{H}$ form a $(k-1)$-cluster. \end{lemma} \begin{proof} Let $h_2 \in E'$. We will show that every vertex in $h_2$ is contained in the same set of hyperedges of $E'$. Let $v_1,v_2 \in h_2$, and suppose by contradiction that there exists a hyperedge $h_3$ incident to $v_2$ but not to $v_1$. Enumerate the vertices of $T$ as $x_0,x_1,\dots,x_k$ in such a way that the graph induced by the vertices $x_0,x_1,\dots,x_i$ is connected for all $i$, $x_0$ is a leaf of $T$, and $x_0,x_1,x_2,x_3$ is a path of length 3 (such a path exists since $T$ is not a star). For each $i = 1,2,\dots,k$, the vertex $x_i$ is adjacent to exactly one vertex of smaller index; call the edge joining $x_i$ to this vertex $e_i$. We can embed $T$ into $\mathcal{H}$ in the following way. First assign $v_1$ to $x_1$, $h_2$ to $\{x_1,x_2\}$, $v_2$ to $x_2$, $h_3$ to $\{x_2,x_3\}$, and any vertex $v_3 \in h_3\setminus\{v_1,v_2\}$ to $x_3$. For $i=4,\dots,k$, suppose $e_i = \{x_i,x_{j_i}\}$. Pick any hyperedge $h_i \in E'$ incident to $v_{j_i}$ and distinct from $h_2,h_3,\dots,h_{i-1}$ (such hyperedges exist since $d_{\mathcal{H}'}(v_{j_i}) \geq k-1$) and assign it to $e_i$. If $i\leq k-1$, pick any $v_i \in h_i\setminus\{v_1,v_2,\dots,v_{i-1}\}$; if $i= k$, let $\tilde{h}_k$ be the correspondent hyperedge of $h_k$ in $\mathcal{H}$. As $\tilde{h}_k$ has size greater than $k$, we may let $v_{k}$ be any vertex in $\tilde{h}_k\setminus\{v_1,v_2,\dots,v_{k-1}\}$; this vertex $v_k$ is assigned to $x_k$. Finally, since $v_1$ is incident with at least $k-1$ hyperedges, none of which is $h_3$, there is a hyperedge $h_1$ incident to $v_1$ and distinct from the already chosen hyperedges. Let $\tilde{h}_1$ be the correspondent hyperedge of $h_1$, take any vertex in $\tilde{h}_1$ which has not been assigned yet, and assign it to $x_0$. Thus, by replacing each edge $h_i$ with its correspondent hyperedge, we have found a Berge copy of $T$ in $\mathcal{H}$, a contradiction. It follows that any $v_1,v_2 \in h_2$ must be incident with the same set of hyperedges of $\mathcal{H}'$ (by assumption, at least $k-1$ of them), and so $\mathcal{H}'$ contains a $(k-1)$-cluster. \end{proof} Lemma \ref{cluster} says that if $\mathcal{H}$ does not contain a Berge copy of a tree and we are able to find a large enough reduced sub-hypergraph, then $\mathcal{H}$ must have at least one $(k-1)$-cluster. The main idea of the proofs of the main results is to show that if $\mathcal{H}$ has too many hyperedges and no Berge copy of a tree, then, after removing all $(k-1)$-clusters, we would still be able to find a large enough reduced sub-hypergraph. This would imply that there is still another $(k-1)$-cluster in $\mathcal{H}$, a contradiction. \begin{proof}[Proof of Theorem \ref{Multi_tree_theorem}] Let $T$ be a $k$-edge tree which is not a star.
Suppose that $\mathcal{H}$ is an $n$-vertex $r$-uniform multi-hypergraph with at least $\frac{n(k-1)}{r}$ hyperedges such that $\mathcal{H}$ does not contain a Berge copy of $T$, and let $G$ be the incidence bipartite graph of $\mathcal{H}$, i.e., the bipartite graph with color classes $V(\mathcal{H})$ and $E(\mathcal{H})$ where $v\in V(\mathcal{H})$ is adjacent to $h \in E(\mathcal{H})$ if and only if $v\in h$. Since $\displaystyle e(\mathcal{H}) \ge \frac{n(k-1)}{r}$, we have $\displaystyle \frac{e(G)}{v(G)} = \frac{re(\mathcal{H})}{n + e(\mathcal{H})} = \frac{r}{\frac{n}{e(\mathcal{H})}+1} \ge \frac{r}{\frac{r}{k-1}+1} = \frac{r(k-1)}{r+k-1},$ and note that \begin{displaymath} \frac{r(k-1)}{r+k-1} \geq k - 2 \Leftrightarrow r(k-1) \geq (k - 2)(r+k-1) = r(k-1) + (k-2)(k-1) - r \Leftrightarrow r \geq (k-2)(k-1). \end{displaymath} Hence $d(G) = \frac{2e(G)}{v(G)}\geq 2\left(\frac{r(k-1)}{r+k-1}\right) \geq 2(k-2)$, since $r \geq (k-2)(k-1).$ Suppose $\mathcal{H}$ has $t$ distinct $(k-1)$-clusters $S_1,S_2,\dots,S_t$ (recall that by Remark~\ref{disjoint} the $(k-1)$-clusters are edge-disjoint). For each $S_i$, let $X_i$ be the set of vertices which are incident only with hyperedges of $S_i$, let $X = \bigcup_{i=1}^t X_i$, and let $Y$ be the set of vertices that are not in $X$ but are incident with at least one of the $(k-1)$-clusters. Let $G_1$ be the induced subgraph of $G$ obtained by removing $X$, $Y$ and all $(k-1)$-cluster hyperedges from the vertex set of $G$. We will show that $d(G_1) \geq d(G)$ (provided $G_1$ is not the empty graph). The number of edges removed from $G$ is $\sum_{v\in X} d_{\mathcal{H}}(v) + \sum_{v\in Y} d_{\mathcal{H}}(v)$. Since the degree of each $v\in X$ is at most $k-1$, we have that $\displaystyle \left(\sum_{v\in X} d_{\mathcal{H}}(v)\right) \leq \abs{X}(k-1)$. Also, $X$ is only incident with the $(k-1)$-cluster hyperedges, thus we also have the bound $\displaystyle \left(\sum_{v\in X}d_{\mathcal{H}}(v)\right) \leq tr(k-1)$, and since the degree of each $v\in Y$ is at most $\frac{k-1}{2}$ (Claim~\ref{Y}), we have that $\displaystyle \left(\sum_{v\in Y} d_{\mathcal{H}}(v) \right) \leq \frac{(k-1)\abs{Y}}{2}$. Therefore \[\left(\sum_{v\in X}d_{\mathcal{H}}(v) + \sum_{v\in Y} d_{\mathcal{H}}(v) \right)(r+k-1)\] \[= \left(\sum_{v\in X}d_{\mathcal{H}}(v)\right)r + \left(\sum_{v\in X}d_{\mathcal{H}}(v)\right)(k-1) + \left(\sum_{v\in Y} d_{\mathcal{H}}(v) \right)(r+k-1)\] \[\leq \abs{X}r(k-1) + tr(k-1)^2 + \frac{(k-1)\abs{Y}}{2}(r+k-1) \leq r(k-1)(\abs{X} + t(k-1) + \abs{Y}),\] where in the last inequality we used $\frac{r+k-1}{2} < r$. Thus, equality can hold only if $Y = \emptyset$. Rearranging, we have \begin{equation} \label{g} \left(\sum_{v\in X}d_{\mathcal{H}}(v) + \sum_{v\in Y} d_{\mathcal{H}}(v) \right)\le \frac{r(k-1)}{r+k-1}\left(\abs{X} + t(k-1) + \abs{Y}\right). \end{equation} The left-hand side of \eqref{g} is the number of removed edges, and the right-hand side is at most $d(G)/2$ times the number of removed vertices. Therefore, by Lemma~\ref{average}, if $G_1$ is non-empty, we have that \[d(G_1) \geq d(G) \geq 2(k-2).\] Hence, by Lemma~\ref{folklore}, there is a subgraph $G_2$ of $G_1$ with minimum degree at least $k-1$. Suppose that $G_2$ has bipartite classes $A \subseteq V(\mathcal{H})$ and $B \subseteq E(\mathcal{H})$, and define $\mathcal{H}'$ by taking the vertex set $V' = A$ and $E' = \{h\cap V': h \in B\}$.
The condition on the minimum degree of $G_2$ implies that every vertex of $\mathcal{H}'$ has degree at least $k-1$ and every hyperedge of $\mathcal{H}'$ has size at least $k-1$. Then by Lemma \ref{cluster}, $\mathcal{H}'$ contains a $(k-1)$-cluster, but this $(k-1)$-cluster corresponds to a $(k-1)$-cluster in $\mathcal{H}$, contradicting the fact that we removed every $(k-1)$-cluster from $\mathcal{H}$. So $\mathcal{H}$ must contain a Berge copy of $T$, unless $G_1$ is empty. Note that, for $G_1$ to be empty, it is necessary that $d(G) = 2\frac{r(k-1)}{r+k-1}$ and that equality holds in inequality~\eqref{g}. This is possible only if $Y = \emptyset$ and \[\abs{X} = \frac{1}{k-1}\sum_{v\in X} d_{\mathcal{H}}(v) = tr.\] Since every $(k-1)$-cluster contains at least $r$ vertices, we have $\abs{X_i}\geq r$, and so each $X_i$ must have size exactly $r$; hence $\mathcal{H}$ is the disjoint union of $t$ hyperedges, each with multiplicity $k-1$. Therefore the number of vertices would be a multiple of $r$ and $e(\mathcal{H}) = \frac{n(k-1)}{r}$. Hence if $e(\mathcal{H}) \geq \frac{n(k-1)}{r}$, then either $\mathcal{H}$ contains a Berge copy of $T$, or $r|n$ and $\mathcal{H}$ is the disjoint union of $\frac{n}{r}$ hyperedges, each with multiplicity $k-1$. \end{proof} \begin{remark} For $r=(k-2)(k-1)$, the proof above also shows that if $e(\mathcal{H}) > \frac{n(k-1)}{r},$ then $\mathcal{H}$ must contain a Berge copy of $T$. However, the extremal construction does not follow from that proof. \end{remark} \begin{proof}[Proof of Theorem \ref{Not_Star_Theorem}] Let $T$ be a $k$-edge tree which is not a star. We may assume $k>3$, since otherwise $T$ is a path, and we already know the result for paths. Let $\mathcal{H}$ be an $n$-vertex hypergraph with at least $\frac{n(k-1)}{r+1}$ hyperedges and $r \ge k(k-2)$. We will proceed by induction on the number of vertices $n$; the base cases $n \le r+1$ are trivial. If there is a set $U$ of size $r+1$ which is incident with at most $k-1$ hyperedges, put $V'=V\setminus U$ and let $n' = |V'|=n-r-1$. By induction, $\mathcal{H}'$, the hypergraph induced by $V'$, has at most $\frac{n'(k-1)}{r+1}$ hyperedges, and equality holds only if $r+1|n'$ and $\mathcal{H}'$ is the disjoint union of sets of $r+1$ vertices containing $k-1$ hyperedges each, unless $T$ is the balanced double star, in which case $\mathcal{H}'$ may also contain a $(k-1,\frac{k-1}{2})$-regular two-sided hypergraph as described in the statement of the theorem. Note that if one of the hyperedges incident with $U$ is incident with a vertex $v$, $v \in V'$, then $v$ has degree at least $\floor{\frac{k+1}{2}}$ and lies in a $(k-1)$-cluster of $\mathcal{H}'$; thus we obtain a Berge copy of $T$ from Claim~\ref{Y}. Hence the $k-1$ hyperedges incident with $U$ are contained in the vertex set $U$, and $\mathcal{H}$ has the desired structure. Similarly to the proof of Theorem~\ref{Multi_tree_theorem}, we have that \[\displaystyle\frac{e(G)}{v(G)} = \displaystyle\frac{re(\mathcal{H})}{n+e(\mathcal{H})} = \frac{r}{\frac{n}{e(\mathcal{H})}+1} \ge \frac{r}{\frac{r+1}{k-1} + 1} = \frac{r(k-1)}{r+k},\] and note that \begin{displaymath} \frac{r(k-1)}{r+k} \geq k - 2 \Leftrightarrow r(k-1) \geq (k - 2)(r+k) = r(k-1) + (k-2)k - r \Leftrightarrow r \geq k(k-2). \end{displaymath} Hence $d(G) = \frac{2e(G)}{v(G)}\geq 2\left(\frac{r(k-1)}{r+k}\right) \geq 2(k-2)$, since $r \geq k(k-2).$ Suppose that $\mathcal{H}$ has $t$ distinct $(k-1)$-clusters $S_1,S_2,\dots, S_t$. Define the sets $X_1,\dots,X_t,X$ and $Y$ as in the proof of Theorem~\ref{Multi_tree_theorem}.
We are going to remove all vertices and hyperedges of these $(k-1)$-clusters as in the previous proof, and we will denote the incidence bipartite graph of $\mathcal{H}$ by $G$. By $G_1$ we will denote the incidence bipartite graph of the hypergraph $\mathcal{H}'$ obtained from $\mathcal{H}$ after removing the $(k-1)$-clusters. If $\abs{X_i} \geq r+1$ for some $i$, then by taking $U\subseteq X_i$ of size $r+1$, we would have that $U$ is incident with at most $k-1$ hyperedges, and we would be done by induction. Hence we may assume that $\abs{X_i}\leq r$. For each $i$ with $\abs{X_i} = r$, we have \[\sum_{v\in X_i} d_{\mathcal{H}}(v) \leq (r-1)(k-1) +1 = \abs{X_i}(k-1)-(k-2),\] since any hyperedge is incident with at most $r-1$ vertices from $X_i$, with the possible exception of at most one hyperedge ($X_i$ itself, if $X_i \in E(\mathcal{H})$). For each $i$ with $\abs{X_i} \leq r-1$, we have \[\sum_{v\in X_i} d_{\mathcal{H}}(v) \leq \abs{X_i}(k-1) \leq (r-1)(k-1).\] Let $a$ be the number of $X_i$, $1 \le i \le t$, with size exactly $r$. Then we have the following inequalities: \begin{equation} \label{aaaa} \displaystyle\sum_{v\in X}d_{\mathcal{H}}(v) = \sum_{\substack{{\abs{X_i}= r} \\ v\in X_i}} d_{\mathcal{H}}(v) + \sum_{\substack{{\abs{X_i} < r} \\ v\in X_i}} d_{\mathcal{H}}(v) \leq t(r-1)(k-1) + a, \end{equation} and \begin{equation} \label{bbbb} \sum_{v\in X}d_{\mathcal{H}}(v) \leq \sum_{\abs{X_i}= r} \left(\abs{X_i}(k-1)-(k-2)\right) + \sum_{\abs{X_i}< r}\abs{X_i}(k-1) =\abs{X}(k-1) - a(k-2). \end{equation} We also have \begin{equation} \label{zyx} tr(k-1) \leq \sum_{v\in X}d_{\mathcal{H}}(v) + \sum_{v \in Y} d_{\mathcal{H}}(v) \leq t(r-1)(k-1) + a + \frac{k-1}{2}\abs{Y}, \end{equation} where the first inequality follows from the fact that the $t(k-1)$ hyperedges of the $t$ clusters are incident only with vertices of $X\cup Y$, and the second inequality follows from \eqref{aaaa} together with Claim~\ref{Y}. Rearranging \eqref{zyx} yields \begin{equation} \label{gh} \displaystyle t(k-1) \leq a + \frac{\abs{Y}(k-1)}{2}. \end{equation} The following three bounds come from multiplying inequalities \eqref{bbbb} and \eqref{aaaa} by $r$ and $k$, respectively, and the bound from Claim \ref{Y} by $k+r$: \begin{equation} \label{ab} \left(\sum_{v\in X}d_{\mathcal{H}}(v)\right)r \leq \abs{X}r(k-1) - ar(k-2), \end{equation} \begin{equation} \label{cd} \left(\sum_{v\in X}d_{\mathcal{H}}(v)\right)k \leq t(r-1)k(k-1) + ak, \end{equation} \begin{equation} \label{ef} \left(\sum_{v\in Y} d_{\mathcal{H}}(v)\right)(k+r) \leq \frac{\abs{Y}(k-1)}{2}(k+r). \end{equation} Now we bound the number of removed edges of $G$ times $r+k$. From \eqref{ab}, \eqref{cd}, \eqref{ef} and then \eqref{gh}, it follows that $$\left(\sum_{v\in X}d_{\mathcal{H}}(v) + \sum_{v\in Y} d_{\mathcal{H}}(v)\right)(k+r) \leq \abs{X}r(k-1) - ar(k-2) + t(r-1)k(k-1) + ak + \frac{\abs{Y}(k-1)}{2}(k+r)$$ $$= \abs{X}r(k-1) - ar(k-2) + tr(k-1)^2 + t(k-1)(r-k) + ak + \frac{\abs{Y}(k-1)}{2}(k+r)$$ $$\leq \abs{X}r(k-1) - ar(k-2) + tr(k-1)^2 + (r-k)\left(a + \frac{\abs{Y}(k-1)}{2}\right) + ak + \frac{\abs{Y}(k-1)}{2}(k+r)$$ $$= \abs{X}r(k-1) - ar(k-3) + tr(k-1)^2 + \abs{Y}(k-1)r = r(k-1)(\abs{X}+\abs{Y}+t(k-1)) - ar(k-3)$$ $$\leq r(k-1)(\abs{X}+\abs{Y}+t(k-1)).$$ Rearranging, we have \begin{equation} \label{f} \left(\sum_{v\in X}d_{\mathcal{H}}(v) + \sum_{v\in Y} d_{\mathcal{H}}(v) \right)\le \frac{r(k-1)}{r+k}\left(\abs{X} + t(k-1) + \abs{Y}\right).
\end{equation} The left-hand side of \eqref{f} is the number of removed edges, and the right-hand side of \eqref{f} is at most $d(G)/2$ times the number of removed vertices. Hence, by Lemma~\ref{average}, if $G_1$ is nonempty, we have that \[d(G_1) \geq d(G) \geq 2(k-2).\] Thus, by Lemma~\ref{folklore}, we can find a subgraph $G_2$ of $G_1$ with minimum degree at least $k-1$. Suppose that $G_2$ has bipartite classes $A \subseteq V$ and $B \subseteq E(\mathcal{H})$, and define $\mathcal{H}'$ by taking the vertex set $V' = A$ and hyperedge set $E' = \{h\cap V': h \in B\}$. The condition on the minimum degree of $G_2$ implies that every vertex of $\mathcal{H}'$ has degree at least $k-1$ and every hyperedge of $\mathcal{H}'$ has size at least $k-1$. Then by Lemma~\ref{cluster}, $\mathcal{H}'$ contains a $(k-1)$-cluster, which contradicts the fact that we have removed all $(k-1)$-clusters from $\mathcal{H}$. For $G_1$ to be empty, it is necessary that $d(G) = 2\frac{r(k-1)}{r+k}$, and for \eqref{f} to hold with equality we must have $e(\mathcal{H}) = \frac{n(k-1)}{r+1}$. To obtain equality in \eqref{f}, it is necessary that $a=0$ (since $k>3$) and that every hyperedge contains one of the $X_i$. It then follows that $\abs{X} = t(r-1)$, and by \eqref{gh}, $\abs{Y} = 2t$. By \eqref{ef}, for every $v\in Y$ we have $d_{\mathcal{H}}(v) = \frac{k-1}{2}$, so $n = t(r+1)$. Then $\mathcal{H}$ is the disjoint union of sets of $r+1$ vertices with $k-1$ hyperedges, together with a hypergraph constructed from the classes $A=\{X_1,X_2,\dots,X_t\}$ and $B = Y$, where $\{y,X_i\}$ is an edge if $X_i \cup \{y\}$ is a hyperedge of $\mathcal{H}$. Note that $2t = 2\abs{A} = \abs{B}$, the degree of every vertex in $B$ is $\frac{k-1}{2}$, and every vertex of $A$ has degree $k-1$; that is, this part of $\mathcal{H}$ is a $(k-1,\frac{k-1}{2})$-regular two-sided hypergraph. However, this is only possible if $k$ is odd, and it is simple to check that this construction contains a Berge copy of every $k$-edge tree which is not the balanced $k$-edge double star or the $k$-edge star. \end{proof} \section*{Acknowledgments} The research of the first three authors was partially supported by the National Research, Development and Innovation Office NKFIH, grants K116769, K117879 and K126853. The research of the second author is partially supported by Shota Rustaveli National Science Foundation of Georgia SRNSFG, grant number DI-18-118. The research of the third author is supported by the Institute for Basic Science (IBS-R029-C1).
{ "timestamp": "2020-04-16T02:02:08", "yymm": "1904", "arxiv_id": "1904.06728", "language": "en", "url": "https://arxiv.org/abs/1904.06728" }
\section*{\bf Introduction} A quandle is a set with two binary operations $(Q,\triangleleft,\triangleright)$ satisfying the following identities: \begin{enumerate} \item $a\triangleleft (b\triangleleft c)=(a\triangleleft b) \triangleleft (a \triangleleft c) $ \item $(a\triangleright b)\triangleright c=(a\triangleright c)\triangleright (b\triangleright c)$ \item $a \triangleleft (b \triangleright a)=b=(a \triangleleft b) \triangleright a$ \item $a\triangleleft a=a=a\triangleright a$ \end{enumerate} A group $G$ gives an example of a quandle if we define the operations as follows: \[ h\triangleleft g = g^{h^{-1}}, \hspace{1cm} g \triangleright h = g^h, \] where $a^b=b^{-1}ab.$ This quandle is denoted by ${\sf Conj}(G).$ The definition is motivated by knot theory (see \cite{Joyce} for details). As for any type of algebraic structure, in quandle theory there is a notion of a free quandle. V.~Bardakov, M.~Singh and M.~Singh in \cite[Problem 6.12]{Bardakov} raised the question of an analogue of the Nielsen–Schreier theorem for quandles: is it true that any subquandle of a free quandle is free? This note is devoted to an affirmative answer to this question: \ \noindent {\bf Theorem}. {\it A subquandle of a free quandle is free.} \ Moreover, for a subquandle $Q$ of a free quandle, we give an explicit construction of a basis $S(Q)$ of $Q.$ The main tool for us is the following description of free quandles. \ \noindent {\bf Theorem}~{\cite[Th.4.1]{Joyce}}. {\it Let $X$ be a set and let $F(X)$ be the free group generated by $X$. Denote by $FQ(X)$ the union of the conjugacy classes of the elements of $X$: \[FQ(X)= \bigcup_{x\in X} x^{F(X)}.\] Consider $FQ(X)$ as a subquandle of ${\sf Conj}(F(X)).$ Then $FQ(X)$ is a free quandle generated by $X$.} \section*{\bf Proof} A subset of a group is called {\it independent} if it is a basis of a free subgroup. \begin{Lemma}\label{lemma_if_S_ind} Let $S$ be a subset of $FQ(X)$ which is independent in $F(X).$ Then the subquandle of $FQ(X)$ generated by $S$ is free. \end{Lemma} \begin{proof} Consider the group homomorphism $F(S)\to F(X)$ induced by the embedding $S\hookrightarrow F(X).$ Since $S$ is independent, the homomorphism is injective. Then the restriction to the free quandles $FQ(S)\to FQ(X)$ is also injective. The subquandle generated by $S$ in $FQ(X)$ is equal to the image of the injective quandle morphism $FQ(S)\to FQ(X),$ and hence it is free. \end{proof} Fix a set $X$ and denote $F:=F(X).$ We treat an element $w$ of $F$ as a reduced word and denote by $w_i$ its $i$th factor: $$w=w_1w_2\dots w_n, \hspace{1cm} w_i\in X\cup X^{-1},\ \ w_i\ne w_{i-1}^{-1}.$$ The length of the word is denoted by $|w|:=n.$ Let $Q$ be a subquandle of $FQ(X)$. We are going to construct a subset $S(Q)$ of $Q$ and prove that it is a basis of $Q.$ First, for an element $x\in X$ we consider the following subset: \[ P_x:=\{ w\in F \mid x^w\in Q \wedge \ (w=1 \vee w_1\ne x^{\pm 1} ) \}.
\] It consists of all reduced words $w$ such that $x^w\in Q$ and whose first factor, if it exists, differs from $x$ and $x^{-1}.$ Then we define the set $T_x$ as follows: \[T_x = \left\{ w \in P_x \mid \forall q\in Q \ \forall \varepsilon\in \{1,-1\} \ \ \ |wq^\varepsilon| > |w| \right\}.\] In some sense, $T_x$ consists of the ``non-shrinkable'' elements of $P_x.$ Finally, we consider the set \[ S(Q) := \bigcup_{x \in X}x^{T_x},\] which consists of all elements of the form $x^w,$ where $w\in T_x.$ It is easy to see that $S(Q)\subseteq Q.$ Our aim is to prove that $Q$ is a free quandle generated by $S(Q).$ \begin{Lemma}\label{lemma_SQ_gen} The set $S(Q)$ generates the quandle $Q.$ \end{Lemma} \begin{proof} Set $S:=S(Q)$ and denote by $\langle S \rangle$ the subquandle of $Q$ generated by $S.$ We need to prove that $Q=\langle S \rangle.$ Take an element $q\in Q.$ By construction, there exist an element $x\in X$ and $w\in P_x$ such that $q=x^w.$ So we need to prove that $x^w\in Q \Rightarrow x^w\in \langle S\rangle.$ In order to prove this by induction, we reformulate it in the following way: for any $n\geq 0,$ if $|w|= n$, $x\in X$ and $x^w\in Q,$ then $x^w \in \langle S\rangle.$ We prove this by induction on $n.$ For the base case, assume that $n=0.$ Then $w=1.$ Hence $1\in P_x$ and $1\in T_x,$ so $x\in x^{T_x}\subseteq S,$ and therefore $x\in \langle S \rangle.$ For the induction step, assume that $|w|=n$, $x\in X$ and $x^w\in Q.$ If $w\in T_x,$ then the statement is obvious, because $x^w\in S.$ So we may assume that $w\notin T_x.$ In this case there exists $q\in Q$ such that $|wq^\varepsilon|\leq |w|$ for some $\varepsilon\in \{1,-1 \}.$ Note that elements of $FQ(X)$ have odd lengths, and hence $|wq^\varepsilon|$ and $|w|$ have different parity, so they cannot be equal. Therefore $|wq^\varepsilon|<n.$ We set $w':=wq^\varepsilon.$ Then $x^{w'}=x^w\triangleright^{\varepsilon} q,$ where $\triangleright^{1}=\triangleright$ and $\triangleright^{-1}=\triangleleft.$ We obtain $x^{w'}\in Q.$ By the induction hypothesis, we have that $x^{w'}\in \langle S \rangle.$ Since $q\in Q,$ there exist $y\in X$ and $u\in P_y$ such that $q=y^u.$ Note that the first factor of $u$ is not equal to $y^{\pm 1},$ and hence there are no cancellations in the product $u^{-1}y^\varepsilon u.$ Since the length of $w'=wu^{-1}y^\varepsilon u$ is less than the length of $w,$ we obtain that $u^{-1}y^\varepsilon$ completely cancels in the product $wu^{-1}y^\varepsilon.$ Therefore $|u|<|w|=n.$ By the induction hypothesis we obtain $q=y^u\in \langle S\rangle.$ Combining this with the fact that $x^{w'}\in \langle S \rangle$ and the equation $x^w=x^{w'} \triangleright^{-\varepsilon} q,$ we obtain $x^w\in \langle S \rangle.$ \end{proof} \begin{Lemma}\label{lemma_SQ_indep} The set $S(Q)$ is independent in $F(X).$ \end{Lemma} \begin{proof} Following M. Hall \cite[\S 7.2]{Hall}, we say that a subset $Y$ of $F(X)$ with $Y\cap Y^{-1}=\emptyset$ {\it possesses significant factors} if there is a collection of indices $\{i(w)\}_{w\in Y\cup Y^{-1}}$ such that $1\leq i(w)\leq |w|,$ $i(w^{-1})=|w|+1-i(w)$, and in each product $wv$ for $w,v\in Y\cup Y^{-1}, w\ne v^{-1},$ the cancellation does not reach the factors $w_{i(w)}$ and $v_{i(v)}.$ If a subset possesses significant factors, then it is independent \cite[Th. 7.2.2]{Hall}. So it suffices to show that $S(Q)$ possesses significant factors.
For each $x^w\in S(Q)$ we choose the central factor $x$ as a significant factor: $i(x^w):=|w|+1.$ Then we only need to prove that the central factors $x^\varepsilon,y^\delta$ of the words $x^{\varepsilon w}$ and $y^{\delta v}$ do not cancel in the product $x^{\varepsilon w}y^{ \delta v}$ for $x,y\in X$, $w\in T_x,$ $v\in T_y,$ $\varepsilon,\delta\in \{1,-1\},$ and $x^{\varepsilon w}\ne y^{-\delta v}.$ Assume the contrary, that one of the factors $x^\varepsilon,y^\delta$ cancels in the product $w^{-1}x^\varepsilon wv^{-1} y^\delta v$. Then one of the following holds: \begin{enumerate} \item $v^{-1}$ can be presented as a product without cancellations $v^{-1}=w^{-1}x^{-\varepsilon}u$ for some $u;$ \item $w$ can be presented as a product without cancellations $w=uy^{-\delta}v$ for some $u.$ \end{enumerate} In the first case we have $vx^{-\varepsilon w}=u^{-1}w,$ and hence $|vx^{-\varepsilon w}|<|v|,$ which contradicts the fact that $v\in T_y.$ In the second case we have $wy^{-\delta v}=uv,$ and hence $|wy^{-\delta v}|<|w|,$ which contradicts the fact that $w\in T_x.$ \end{proof} \begin{proof}[Proof of the theorem] Let $Q$ be a subquandle of $FQ(X).$ By Lemma \ref{lemma_SQ_gen}, $Q$ is generated by the set $S(Q),$ which is independent by Lemma \ref{lemma_SQ_indep}. Then by Lemma \ref{lemma_if_S_ind}, we obtain that $Q$ is free. \end{proof}
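To make the conjugation-quandle operations above easy to experiment with, we include a short Python sketch. It is only an illustration and not part of the proof; the encoding of reduced words as case-sensitive strings (uppercase letters denoting inverse generators) is our own convention. \begin{verbatim}
# Reduced words over X = {a, b}: lowercase = generators, uppercase = inverses.
def reduce_word(word):
    """Cancel adjacent inverse pairs until the word is reduced."""
    out = []
    for c in word:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def inv(word):
    """Inverse of a reduced word."""
    return word[::-1].swapcase()

def conj(g, h):
    """g^h = h^{-1} g h, returned as a reduced word."""
    return reduce_word(inv(h) + g + h)

def tri_left(h, g):   # h <| g := g^(h^{-1})
    return conj(g, inv(h))

def tri_right(g, h):  # g |> h := g^h
    return conj(g, h)

a, b = 'a', 'b'
c = tri_right(a, b)                                    # a^b in FQ({a,b})
assert tri_left(a, a) == a == tri_right(a, a)          # identity (4)
assert tri_left(a, tri_right(b, a)) == b               # identity (3)
assert tri_right(tri_left(a, b), a) == b               # identity (3)
assert tri_left(a, tri_left(b, c)) == \
       tri_left(tri_left(a, b), tri_left(a, c))        # identity (1)
\end{verbatim} The assertions spot-check the quandle identities for ${\sf Conj}(F(\{a,b\}))$ on a few elements of $FQ(\{a,b\})$.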
{ "timestamp": "2019-04-16T02:11:44", "yymm": "1904", "arxiv_id": "1904.06571", "language": "en", "url": "https://arxiv.org/abs/1904.06571" }
\section{Introduction} \label{sec:intro} Stars form from the gravitational collapse of dense cores in dusty molecular clouds. As the obscuring dust is used up or otherwise dispersed, the progression from embedded protostar to pre-main-sequence star can be followed through the shift in the peak of the spectral energy distribution (SED) from the far-infrared to the optical. The spectral slope from 2 to $25\,\mu$m provides the most commonly used classification, beginning with rising Class I, through Flat Spectrum, to decreasing Class II, and finally Class III with weak or no infrared excesses. The star has almost reached its final mass by the Class I phase, within $\sim 10^5$ years after the onset of collapse \citep{2018A&A...618A.158K}, but the remaining mass in the accompanying circumstellar disk, and how that mass evolves through the Classes as planets form over the next several Myr, is not known. The material content of these small, cold, planet-forming disks is best measured using high-resolution millimeter wavelength observations. The Atacama Large Millimeter Array (ALMA) has the imaging speed and sensitivity to carry out complete surveys of disks in nearby star-forming regions, providing the essential millimeter wavelength counterpart to previous infrared surveys. However, the majority of ALMA disk measurements have been of Class II sources, since these are much more numerous than the short-lived Class I sources and very little material remains in the later Class III phase \citep{2015A&A...583A..66H}. Following disk mass evolution requires a large survey of a star-forming region with the right age to contain a mix of many protostars in all evolutionary classes. The Ophiuchus region is ideally suited on account of its proximity, youth, and size ($\sim 140$\,pc, $\sim 1$\,Myr, and $\sim 300$ protostars). Here, we discuss the Ophiuchus DIsk Survey Employing ALMA (ODISEA) project, a complete survey of the millimeter continuum and CO line emission from all the protostars identified in the \emph{Spitzer} ``Cores to Disks'' Legacy project \citep{2009ApJS..181..321E}. The survey and first results are described in \citet[hereafter Paper I]{2019MNRAS.482..698C}. In this Letter, we describe the completion of the initial survey and its vetting with the \emph{Gaia} mission. We calculate disk masses and compare their distributions across protostellar classes, revealing their evolution with more clarity than before. We find that disk masses decline from Class I to Class II, smoothly through the intermediate Flat Spectrum stage. However, this seemingly simple picture of monotonic evolution is complicated by a comparison with other regions, which shows that Ophiuchus Class II disks have slightly, but significantly, lower masses than those in the slightly older Lupus region. We conclude with a discussion of these results and their implications for planet formation. \section{Observations} \label{sec:obs} The full set of 289 sources in the ODISEA sample was observed in two samples, A and B, with 147 and 142 sources, respectively. Paper I describes the full sample selection and the observations of sample A, consisting of the Class I, Flat Spectrum, and bright ($K \leq 10$\,mag) Class II sources. Here, we augment those data with the observations of sample B, consisting of the fainter ($K > 10$\,mag) Class II and Class III sources in the same Cycle 4 ALMA program 2016.1.00545.S.
The observations of sample B were carried out in ALMA Band 6 with 40 antennas in the C40-3 array configuration (15 to 500\,m baselines) on 2018 May 2$^{\rm nd}$, and August 20$^{\rm th}$ and 21$^{\rm st}$. The precipitable water vapor was 2.15, 0.95, and 1.43\,mm, respectively, with corresponding average system temperatures of 122, 98, and 109\,K. The correlator was configured in the same way as for sample A, with a total continuum bandwidth of 7.3 GHz centered at 225.4\,GHz ($\lambda = 1.33$\,mm). There were also three higher spectral resolution windows centered on the $J=2-1$ transitions of CO, $^{13}$CO, and C$^{18}$O. These lines constrain the disk gas content and will be discussed in a future paper of the ODISEA series. The visibilities were calibrated using the standard data pipeline scripts with version 5.3.0 of the CASA software package. The gain and bandpass calibrators were J1625-2527, J1517-2422, and J1924-2914. The flux scale was referenced to J1733-1304 and J1517-2422 and has an uncertainty of 10\%. Each of the 142 sources was observed for a total of 54 seconds. Continuum images were created using the task {\tt tclean} and inspected for multiplicity. The beam size was similar for all sources, $\sim 0\farcs 98 \times 0\farcs 74$ at a position angle of $\sim 88^\circ$, about four times larger than for sample A due to the more compact configuration. The expectation (which was realized) was that the disk fluxes in this sample would be lower due to their more evolved state and the lower stellar masses, and the lower resolution was chosen to ensure high mass sensitivity independent of disk size. All but three sources were indeed unresolved, and only one millimeter binary was found. Photometry of the unresolved sources was carried out by fitting a point source to the visibilities using the task {\tt uvmodelfit}. Total fluxes for the binary and resolved sources were measured using aperture photometry. The distance to a source, $d$, used to be a considerable source of error in determining disk and stellar properties but is now negligible thanks to the \emph{Gaia} mission. Out of 289 sources in the original ODISEA sample, 169 have parallaxes ($\pi$) in the Data Release 2 catalog \citep{2018A&A...616A...1G}. Of these, 23 sources (4 Class II, 19 Class III) have $\pi < 2.5$\,mas, which places them at a distance greater than 400\,pc. We considered these to be background objects, probably red giants, and removed them from subsequent analysis (all are undetected in our ALMA observations). As in \citet{2018A&A...618L...3M}, we set $d = 1/\pi$ for those sources with $\pi/\sigma_\pi > 10$, since the inversion bias is very small in these cases \citep{2015PASP..127..994B}. For the 14 sources with larger fractional errors and the 106 with no measurement, we used a mean distance $\bar d = 139.4$\,pc. Finally, we removed a misclassified Class III source, J162119.2-234229, that was detected with a flux density of $0.79\pm 0.13$\,mJy but is actually a Be star (HD\,147196). The ALMA measurements and \emph{Gaia} distances for the final sample of 265 sources (279 disks after allowing for 12 binaries and one triple system) are listed in Table 1.
\begin{deluxetable*}{rcrrrcc}[ht] \tablecaption{Disk distances and flux densities\tablenotemark{a} \label{tab:fluxes}} \tablenum{1} \tablewidth{0pt} \tablehead{ \colhead{Spitzer ID} & \colhead{Class} & \colhead{$d$} & \colhead{$F_{225GHz}$} & \colhead{$\sigma_{225GHz}$} & \colhead{$\alpha_{2000}$} & \colhead{$\delta_{2000}$} \\ \colhead{(SSTc2d +)} & \colhead{} & \colhead{(pc)} & \colhead{(mJy)} & \colhead{(mJy)} & \colhead{($^\circ$)} & \colhead{($^\circ$)} \\[-5mm] } \startdata J162034.2-242613 & II & 139.40 & 0.22 & 0.11 & ... & ... \\ J162118.5-225458 & II & 138.97 & 3.26 & 0.13 & 245.32696 & -22.91624 \\ J162131.9-230140 & II & 137.00 & 4.60 & 0.16 & 245.38302 & -23.02800 \\ J162138.7-225328 & I & 139.40 & 0.17 & 0.20 & ... & ... \\ J162142.0-231343 & II & 138.94 & 1.75 & 0.11 & 245.42493 & -23.22884 \\ \enddata \tablenotetext{a}{Only the first 5 lines are shown here. The full table of 279 disks is available in machine-readable form in the online journal.} \end{deluxetable*} \section{Disk Mass Distributions} \label{sec:mass} The dust masses of the disks are calculated in the simplest way, under the assumption of optically thin emission throughout, with a uniform temperature, $T_{\rm dust} = 20$\,K, and opacity coefficient, $\kappa_\nu = (\nu/100\,{\rm GHz})\,{\rm cm^2~g^{-1}}$. For a disk with flux density $F_\nu$, this gives the standard formula, \begin{equation} M_{\rm dust} = \frac{F_\nu d^2}{\kappa_\nu B_\nu(T_{\rm dust})} = 0.592\,M_\oplus \left(\frac{F_{\rm 225\,GHz}}{1\,{\rm mJy}}\right) \left(\frac{d}{140\,{\rm pc}}\right)^2, \label{eq:kappa} \end{equation} where $B_\nu$ is the Planck function. This approach is justified for several reasons: most sources in the survey have only one millimeter wavelength flux measurement and are not well resolved; detailed radiative transfer modeling of ALMA disk images around protostars with similar luminosities to those here finds masses that are very similar to those derived from this equation \citep{2017A&A...606A..88T}; and it allows the cleanest comparison of disks both within the large Ophiuchus sample and between surveys of other regions, without adding uncertain effects from poorly constrained parameter variations. With that said, we return to the inherent assumptions and possible complexities in the discussion of the results in \S\ref{sec:discussion}. A disk was deemed to be detected by ALMA if the measured flux density was at least three times the rms and the source position was within $1''$ of the \emph{Spitzer} coordinates. For nondetections, we set a mass upper limit corresponding to a flux density of three times the rms. Cumulative mass distributions were then determined using survival analysis, which uses the constraints from upper limits in the data \citep{1985ApJ...293..192F}. When plotted against the logarithm of the mass, these distributions were well fit by the integral of a gaussian, indicating that the underlying probability distribution function is log-normal. To aid the visualization of the distributions, we therefore show both the observed cumulative distributions with a $\pm 1\sigma$ range and the corresponding range of gaussian fits to the probability distribution in the figures below. \subsection{Across Protostellar Evolutionary Classes} \label{subsec:YSOclasses} The dust mass distributions for each protostellar Class in Ophiuchus are plotted in Figure~\ref{fig:YSOclasses}.
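The flux-to-mass conversion behind these distributions, Equation~\ref{eq:kappa}, can be checked numerically. The following short Python sketch is our own illustration (assuming the \texttt{astropy} package is available; the function name is arbitrary) and reproduces the 0.592 coefficient. \begin{verbatim}
# Numerical check of Eq. (1): optically thin dust mass from a mm flux density.
import numpy as np
import astropy.units as u
from astropy.constants import h, c, k_B

def dust_mass(F_nu, d, nu=225 * u.GHz, T_dust=20 * u.K):
    # opacity law: kappa_nu = (nu / 100 GHz) cm^2 / g
    kappa = (nu / (100 * u.GHz)).to(u.dimensionless_unscaled) * u.cm**2 / u.g
    # Planck function B_nu(T_dust)
    x = (h * nu / (k_B * T_dust)).to(u.dimensionless_unscaled).value
    B_nu = 2 * h * nu**3 / c**2 / np.expm1(x)
    return (F_nu * d**2 / (kappa * B_nu)).to(u.M_earth)

print(dust_mass(1.0 * u.mJy, 140 * u.pc))   # ~0.59 M_earth, matching Eq. (1)
\end{verbatim}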
For multiple systems, we have only included the primary disk, as defined by the infrared brightness, because the evolutionary state of the secondary (or tertiary) member is not known from the \emph{Spitzer} data. \begin{figure}[ht] \plottwo{odisea-cdf.pdf}{odisea-pdf.pdf} \caption{Dust mass distributions for Ophiuchus disks around protostars of different infrared evolutionary states. The cumulative distributions derived from the censored data are shown in the left hand plot, where the shading illustrates the $1\sigma$ uncertainty at each mass and the colors indicate protostellar class. The corresponding gaussian probability distribution functions for the Class I, Flat, and Class II sources are shown in the right panel, where the shading now illustrates the range of allowed fits.} \label{fig:YSOclasses} \end{figure} The cumulative distributions shift to lower masses as protostars evolve. However, the gaussian fits show that the mean Class I disk mass is only about a factor of 5 higher than that of Class II, a difference that is less than the standard deviation (Table~\ref{tab:YSOclasses}). The upper quartile of Class II disks is more massive than the Class I mean, and the large overlap is clear in the plot of the fitted probability distributions. Surprisingly, not all Class I or Flat Spectrum sources were detected, despite a $3\sigma$ mass sensitivity of $\sim 0.3\,M_\oplus$. In contrast, we detected three Class III sources with dust masses $0.2-0.4\,M_\oplus$. This is not enough to strongly constrain the full distribution but suggests that more of these evolved disks may be detectable with a moderate increase in sensitivity. \begin{deluxetable}{crrr}[ht] \tablecaption{Gaussian fits to Ophiuchus Disks \label{tab:YSOclasses}} \tablecolumns{4} \tablenum{2} \tablewidth{0pt} \tablehead{ \colhead{YSO Class} & \colhead{N} & \colhead{$\mu(M/M_\oplus)$\tablenotemark{a}}& \colhead{$\sigma({\rm log_{10}} (M/M_\oplus))$}\\[-5mm] } \startdata I & 28 & $3.83^{+1.62}_{-1.31}$ & $0.86^{+0.06}_{-0.02}$ \\[1mm] Flat & 50 & $2.49^{+0.82}_{-0.82}$ & $0.83^{+0.03}_{-0.01}$ \\[1mm] II & 172 & $0.78^{+0.12}_{-0.11}$ & $0.97^{+0.06}_{-0.05}$ \\[1mm] \enddata \tablenotetext{a}{The fits are for a gaussian in the logarithm of the mass with mean value $\log_{10}\mu$.} \end{deluxetable} Our results extend a recent ALMA survey of 49 Ophiuchus disks at $870\,\mu$m that was weighted toward Class II sources but found higher average flux densities toward the small subset of Class I and Flat Spectrum objects in its sample \citep{2017ApJ...851...83C}. That survey also found that binaries have significantly lower flux densities. We have carried out high-resolution infrared imaging to identify and study the effect of multiplicity on disks and will discuss this in a future paper (Zurlo, A. et al., in preparation). There are 40 binaries in our sample, a small fraction of the total, and we found that omitting them from the disk distributions here did not significantly change the log-normal fits. \citet{2018ApJS..238...19T} found substantially higher disk masses for Class 0 and I sources in Perseus based on VLA 9\,mm data. However, there can be significant free-free emission at these long wavelengths, and the separation of the ionized gas and dust components is difficult. A comparison of the two regions at millimeter and centimeter wavelengths would help clarify the differences.
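As an aside on the fitting procedure of \S\ref{sec:mass}, the censored-data estimate of a log-normal mass distribution can be sketched in a few lines of Python. The snippet below is our own illustration on synthetic numbers (not the ODISEA measurements): detections contribute the normal pdf in $\log_{10}M$, while upper limits contribute its cdf, as in standard censored maximum-likelihood fitting. \begin{verbatim}
# Censored MLE of a log-normal mass distribution (synthetic example).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
logm_det = rng.normal(0.4, 0.9, size=120)   # log10(M/M_earth), detections
logm_lim = rng.normal(-0.6, 0.3, size=40)   # 3-sigma upper limits

def neg_loglike(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    ll = norm.logpdf(logm_det, mu, sigma).sum()    # detected masses
    ll += norm.logcdf(logm_lim, mu, sigma).sum()   # mass below the limit
    return -ll

res = minimize(neg_loglike, x0=[0.0, 1.0], method='Nelder-Mead')
print(res.x)   # fitted (mu, sigma) of the log-normal
\end{verbatim}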
\citet{2019ApJ...873...54A} and \citet{2018ApJS..238...19T} derived disk masses for Class I sources in Perseus and found values that lie at the upper end of the Ophiuchus Class I distribution here (when differences in the conversion from flux to mass are taken into account). Their calculations require an extra step due, respectively, to lower resolution 1\,mm data that blended disk and envelope emission and longer wavelength 9\,mm data that blended dust and free-free emission. A complete millimeter survey at sub-arcsecond resolution is required to quantitatively compare the two regions. \subsection{Across Star-Forming Regions} \label{subsec:other_regions} ALMA has now surveyed the Class II disk population in several star-forming regions. In Paper I, we found that the cumulative mass distribution for sample A was similar to that of disks in the Taurus and Lupus regions. However, about half of the Ophiuchus sources in that plot were Class I and Flat Spectrum sources, and we showed above that these tend to have more massive disks than those around Class II sources. Furthermore, the Class II sources in sample A are brighter in the K-band, and the disk mass scales with stellar mass \citep{2013ApJ...771..129A}. With the addition of sample B, we can now make a statistically fairer comparison of complete populations of Class II sources. We show the Ophiuchus results in comparison with the ALMA surveys of the Lupus and Upper Scorpius regions because these are similarly sensitive to sub-Earth masses of dust and are complete or very nearly so. Both regions are at a similar distance to Ophiuchus but are older, $\sim 150$\,pc and $\sim 3$\,Myr for Lupus and $\sim 145$\,pc and $\sim 10$\,Myr for Upper Scorpius \citep{2018ApJ...859...21A, 2016ApJ...827..142B}. Disk masses were recalculated from the observed flux densities with the same uniform temperature and opacity prescription as in Equation~\ref{eq:kappa}. A proper comparison across regions also requires accounting for possible differences in the host stars. Spectral types are not yet known for the full ODISEA sample (the analysis of optical and infrared spectra will be presented in a future paper by Ruiz-Rodriguez, D. et al., in preparation), so we use a $1.2\,\mu$m J-band magnitude cutoff of 12 mag to restrict the comparison to stars with masses estimated to be $\gtrsim 0.2\,M_\odot$ in Figure~\ref{fig:regions}. \begin{figure}[ht] \plottwo{regions-cdf.pdf}{regions-pdf.pdf} \caption{Dust mass distributions as in Figure~\ref{fig:YSOclasses}, but now for disks around Class II protostars with estimated stellar masses $\gtrsim 0.1\,M_\odot$ in different star-forming regions.} \label{fig:regions} \end{figure} We now find that the Ophiuchus disks tend to have lower masses than those in Lupus. The change is not surprising given the bias toward more massive disks in sample A, but the result upends the conventional wisdom that disk masses decline monotonically with the age of the star-forming region. The separation between the cumulative distributions is robust to different J-band (or K-band) magnitude cutoffs and is discussed in more detail below. Low disk masses were also recently reported in the similarly young Corona Australis region \citep{2019arXiv190402409C}. The results of the gaussian fits to the cumulative distributions from different regions are provided in Table~\ref{tab:regions}. The probability distributions are uniformly broad, but the mean disk mass appears to start low, then increase, before decreasing at late times.
\begin{deluxetable}{ccrr}[ht] \tablecaption{Gaussian fits to Class II Disks in Different Regions\label{tab:regions}} \tablecolumns{4} \tablenum{3} \tablewidth{0pt} \tablehead{ \colhead{Region} & \colhead{Age (Myr)} & \colhead{$\mu(M/M_\oplus)$}& \colhead{$\sigma({\rm log_{10}} (M/M_\oplus))$}\\ } \startdata Ophiuchus & $\sim 1$ & $2.62^{+0.83}_{-0.65}$ & $0.88^{+0.06}_{-0.04}$ \\[1mm] Lupus & $\sim 3$ & $5.08^{+1.78}_{-1.41}$ & $0.82^{+0.01}_{-0.01}$ \\[1mm] Upper Scorpius & $\sim 10$ & $0.36^{+0.10}_{-0.09}$ & $0.88^{+0.08}_{-0.05}$ \\[1mm] \enddata \end{deluxetable} \section{Discussion} \label{sec:discussion} The ODISEA project is the largest complete ALMA disk survey of a single region to date and provides an unprecedented view of the disk mass distribution across protostellar evolutionary states. We find that disks are more massive in the early Class I stage and steadily decline through the Flat Spectrum to the Class II stage. However, the difference in the mean mass from Class I to Class II is only a factor of about 5, and the distributions have a large overlap. In general, the dust masses are very low: only 10\% of Class I sources have dust masses greater than the estimated total in the solar system, $30\,M_\oplus$ \citep{1981PThPS..70...35H}, and over half are less massive than $4\,M_\oplus$. Based on exoplanet demographics, disks have a ``missing mass'' problem, as noted on several occasions for Class II disks \citep[most recently by][]{2018A&A...618L...3M}, and we now see that it extends to the young, embedded Class I phase. We must therefore consider the validity of the assumptions about temperature, opacity coefficient, and optical depth in \S\ref{sec:mass}. The assumed temperature, 20\,K, is already low and does not leave much room for increasing the mass. The beautiful high-resolution ALMA survey of Class II disks by \citet{2018ApJ...869L..41A} shows that the optical depth at 225\,GHz is generally less than one on $\sim 5$\,au scales, although there could be smaller, very high density concentrations \citep{2018ApJ...865..157A}. The opacity coefficient depends on the size, mineralogy, and shape of the dust grains but does not vary by more than a factor of $\sim 3$ for all but the most extreme sets of these parameters \citep{1994ApJ...421..615P}. However, any such flux-to-mass conversion is ultimately limited to constraining the mass of particles with sizes comparable to the observing wavelength, and there is ample reason to expect considerable mass in much larger bodies at early times: for example, the existence of differentiated meteorites within $\sim 0.4$\,Myr after the first solids in the solar system \citep{2017PNAS..114.6712K}; the detection of a planet in a Class II disk \citep{2016ApJ...826..206J}; and the implicit assumption of such bodies in planetary population synthesis models \citep{2018haex.bookE.143M}. Despite their youth, the Ophiuchus Class II disks have slightly lower masses than the Class II disks in the older Lupus region. A similar result was recently found for the comparably young Corona Australis region \citep{2019arXiv190402409C}. Based on the systematic decline in dust masses from $\sim 3$ to 10\,Myr \citep{2017AJ....153..240A}, we would expect the Ophiuchus and Corona Australis disks to be more, not less, massive. Because disk masses also correlate with stellar mass, such comparisons between regions must consider possible differences in the stellar sample.
The stellar initial mass function is known to be quite universal \citep{2014prpl.conf...53O} although there may be differences around the substellar boundary. We have used a simple near-infrared luminosity cutoff to stay above that limit and will explore this further as we learn more about the stellar properties across the full sample. The inherently limited sample sizes restrict the statistical rigor with which evolutionary trends can be dissected by stellar mass and other parameters, but the low disk masses in the $\sim 1$\,Myr old Ophiuchus and Corona Australis regions appear to be an inherent trait. One possibility is simply that disk masses depend on the local (cloud) environment where the stars form. Such differences do not, however, appear to manifest themselves in stellar properties such as mass distribution or binarity. A more speculative, though exciting, alternative is that the young Class II disks in Ophiuchus and Corona Australis are indeed protoplanetary, with most of the mass locked in planetesimals and smaller bodies, and that the slightly older Class II disks in Lupus and Taurus might be closer to the peak of planet formation, with Earth masses of second-generation dust produced as the disk is stirred up by the aggregation of planetesimals into protoplanets. Demographic studies such as these provide useful insights into disk evolution and planet formation. Nevertheless, as with exoplanet observations, it is important to go beyond a single measure of an object and to gather more information. An important next step will be higher resolution observations to measure sizes and structure, and to see how these vary with protostellar class, stellar mass, and from region to region. \acknowledgments J.P.W. thanks Ewine van Dishoeck for comments. L.C. was supported by CONICYT-FONDECYT grant number 1171246. A.Z. acknowledges support from the CONICYT + PAI/ Convocatoria nacional subvenci\'on a la instalaci\'on en la academia, convocatoria 2017 + Folio PAI77170087. This paper makes use of the following ALMA data: ADS/JAO.ALMA \#2016.1.00545.S. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. \vspace{5mm} \facilities{ALMA} \software{astropy \citep{2013A&A...558A..33A}}
\section{Introduction} Recent years have seen a surge of interest in fine-grained entity typing \textbf{(FET)}, as it serves as an important cornerstone of several natural language processing tasks including relation extraction \cite{mintz2009distant}, entity linking \cite{DBLP:conf/aaai/RaimanR18}, and knowledge base completion \cite{dong2014knowledge}. To reduce the manual effort of labeling training data, distant supervision \cite{mintz2009distant} has been widely adopted by recent FET systems. With the help of an external knowledge base (KB), an entity mention is first linked to an existing entity in the KB and then labeled with all possible types of that entity as supervision. However, despite its efficiency, distant supervision also brings the challenge of \textbf{out-of-context noise}, as it assigns labels in a context-agnostic manner. Early works usually ignore such noise in supervision \cite{ling2012fine,shimaoka2016attentive}, which dampens the performance of distantly supervised models. \begin{figure}[t] \centering \hspace{-0.7em} \subfloat{ \includegraphics[width=0.5\linewidth]{NFETCmentionVIZ.pdf}} \subfloat{ \includegraphics[width=0.5\linewidth]{CLSCmentionVIZ.pdf}} \caption{t-SNE visualization of the mention embeddings generated by NFETC (left) and CLSC (right) on the BBN dataset. Our model (CLSC) clearly groups mentions of the same type into compact clusters.} \label{fig:tsneVis} \vspace{-1em} \end{figure} \begin{figure*} \includegraphics[width=1.0\linewidth,height=0.4\linewidth]{newframe.pdf} \caption{The overall framework of CLSC. We calculate the classification loss only on clean data, while regularizing the feature extractor with CLSC using both clean and noisy data.} \label{fig:fk} \end{figure*} Toward overcoming out-of-context noise, two lines of work have been proposed for distantly supervised FET. The first tries to filter out noisy labels using heuristic rules~\cite{gillick2014context}; however, such heuristic pruning significantly reduces the amount of training data and thus cannot make full use of distantly annotated data. In contrast, the second tries to incorporate the imperfect annotation through a partial-label loss (\textbf{PLL}). The basic assumption is that, \emph{for a noisy mention, the maximum score associated with its candidate types should be greater than the scores associated with any other non-candidate types} \cite{ren2016afet,abhishek2017fine,xu2018neural}. Despite their success, \textbf{PLL}-based models still suffer from \textbf{\emph{confirmation bias}}, since each model takes its own prediction as the optimization objective in the next step. Specifically, given an entity mention, if the typing system assigns the maximum score to a wrong candidate type, it will further maximize the score of that wrong type in subsequent training epochs (in order to minimize \textbf{PLL}), thus amplifying the confirmation bias. Such bias starts from the early stage of training, when the typing model is still very suboptimal, and can accumulate over the course of training. Related discussion can also be found in the setting of semi-supervised learning \cite{lee2006fine,Laine2017iclr,tarvainen2017mean}. In this paper, we propose a new method for distantly supervised fine-grained entity typing. Inspired by \cite{pmlr-v80-kamnitsas18a}, we propose to effectively utilize imperfect annotation as model regularization via \textbf{C}ompact \textbf{L}atent \textbf{S}pace \textbf{C}lustering \textbf{(CLSC)}. 
More specifically, our model encourages the feature extractor to group mentions of the same type into a compact cluster (dense region) in the representation space, which leads to better classification performance. For training data with noisy labels, instead of generating pseudo supervision with the typing model itself, we dynamically construct a similarity-weighted graph between clean and noisy mentions and apply label propagation on the graph to help the formation of compact clusters. Figure \ref{fig:tsneVis} demonstrates the effectiveness of our method in clustering mentions of different types into dense regions. In contrast to \textbf{PLL}-based models, we do not force the model to fit pseudo supervision generated by itself, but only use noisy data as part of the regularization of our feature extractor, thus avoiding bias accumulation. \\ Extensive experiments on standard benchmarks show that our method consistently outperforms state-of-the-art models. Further study reveals that the advantage of our model over the competitors becomes even more significant as the portion of noisy data rises. \begin{figure*} \centering \includegraphics[width=1.05\linewidth,height=0.4\linewidth]{feature_representation3.pdf} \caption{The architecture of the feature extractor $z((m_i,c_i);\theta_z)$} \label{fig:feature extractor} \end{figure*} \section{Problem Definition} Fine-grained entity typing takes as input a corpus and an external knowledge base (KB) with a type hierarchy $\mathcal{Y}$. Given an entity mention (i.e., a sequence of token spans representing an entity) in the corpus, our task is to uncover its corresponding type-path in $\mathcal{Y}$ based on the context. By applying distant supervision, each mention is first linked to an existing entity in the KB and then labeled with all of that entity's possible types. Formally, a labeled corpus can be represented as triples $\mathcal{D}=\{(m_i,c_i,\mathcal{Y}_i)\}_{i=1}^n$, where $m_i$ is the $i$-th mention, $c_i$ is the context of $m_i$, and $\mathcal{Y}_i$ is the set of candidate types of $m_i$. Note that the types in $\mathcal{Y}_i$ can form one or more type paths. In addition, we denote all terminal (leaf) types of the type paths in $\mathcal{Y}_i$ as the target type set $\mathcal{Y}_i^t$ (\textit{e.g.}, for $\mathcal{Y}_i=\{artist, teacher,person\}$, $\mathcal{Y}_i^t=\{artist,teacher\}$). This setting is also adopted by \cite{xu2018neural}. As each entity in the KB can have several type paths, \emph{out-of-context} noise may exist when $\mathcal{Y}_i$ contains type paths that are irrelevant to $m_i$ in context $c_i$. In this work, we regard triples where $\mathcal{Y}_i$ contains only one type path (i.e., $|\mathcal{Y}^t_i| = 1$) as \textbf{clean data}. Other triples are treated as \textbf{noisy data}, where $\mathcal{Y}_i$ contains both the true type path and irrelevant type paths. Noisy data usually makes up a considerable portion of the entire dataset. The major challenge for distantly supervised typing systems is to incorporate both clean and noisy data to train high-quality type classifiers. \section{The Proposed Approach} \label{sec:method} \noindent \textbf{Overview.} The basic assumptions of our approach are: (1) all mentions belonging to the same type should be close to each other in the representation space because they should have similar contexts, and (2) similar contexts imply the same type. For clean data, we compact the representation space of each type to comply with (1). 
For noisy data, given assumption (2), we infer their type distributions via label propagation under the candidate-type constraint. \\ Figure \ref{fig:fk} shows the overall framework of the proposed method. Clean data is used to train the classifier and feature extractor end to end, while noisy data is used only in the CLSC regularization. Formally, given a batch of samples $\{(m_i,c_i,\mathcal{Y}_i^t)\}_{i=1}^B$, we first convert each sample $(m_i,c_i)$ into a real-valued vector $z_i$ via a feature extractor $z((m_i,c_i);\theta_z)$ parameterized by $\theta_z$. Then a type classifier $g(z_i;\theta_g)$ parameterized by $\theta_g$ gives the posterior $P(y|z_i;\theta_g)$. By incorporating the CLSC regularization in the objective function, we encourage the feature extractor $z$ to group mentions of the same type into a compact cluster, which facilitates classification, as shown in Figure \ref{fig:tsneVis}. Noisy data enhances the formation of compact clusters with the help of label propagation. \subsection{Feature Extractor} Figure \ref{fig:feature extractor} illustrates our feature extractor. For a fair comparison, we adopt the same feature extraction pipeline as used in \cite{xu2018neural}. The feature extractor is composed of an embedding layer and two encoders that encode mentions and contexts, respectively.\\ \smallskip \noindent \textbf{Embedding Layer:}\label{sec:emb layer} The output of this layer is a concatenation of word embeddings and word position embeddings. We use the popular 300-dimensional word embeddings supplied by \cite{pennington2014glove} to capture semantic information and randomly initialized position embeddings \cite{zeng2014relation} to capture the relative position of each word with respect to the mention. Formally, given a word embedding matrix $W_{word}$ of shape $d_w\times|V|$, where $V$ is the vocabulary and $d_w$ is the size of the word embedding, each column of $W_{word}$ represents a specific word $w$ in $V$. We map each word $w_j$ in $(m_i,c_i)$ to a word embedding $\mathbf{w}_j^d\in R^{d_w}$. Analogously, we get the word position embedding $\mathbf{w}_j^p\in R^{d_p}$ of each word according to the relative distance between the word and the mention; we use a fixed-length context window here. The final embedding of the $j$-th word is $\mathbf{w}_j^E= [\mathbf{w}_j^d,\mathbf{w}_j^p]$. \\ \smallskip \noindent \textbf{Mention Encoder:} To capture lexical-level information about mentions, an averaging mention encoder and an LSTM mention encoder \cite{hochreiter1997long} are applied. Given $m_i=(w_s,w_{s+1},\cdots,w_{e})$, the averaging mention representation $r_{a_i}\in R^{d_w}$ is: \begin{equation} r_{a_i}=\frac{1}{e-s+1}\sum_{j=s}^e\mathbf{w}_j^d \end{equation} By applying an LSTM over an extended mention $(w_{s-1},w_s,w_{s+1},\cdots,w_e,w_{e+1})$, we get a sequence $(h_{s-1},h_s,h_{s+1},\cdots,h_e,h_{e+1})$. We use $h_{e+1}$ as the LSTM mention representation $r_{l_i}\in R^{d_l}$. The final mention representation is $r_{m_i}=[r_{a_i},r_{l_i}]\in R^{{d_w}+{d_l}}$.\\ \smallskip \noindent \textbf{Context Encoder:} A bidirectional LSTM with $d_l$ hidden units is employed to encode the embedding sequence $(\mathbf{w}_{s-W}^E,\mathbf{w}_{s-W+1}^E,\cdots,\mathbf{w}_{e+W}^E)$: \begin{equation} \begin{aligned} \overrightarrow{h_{j}}=& LSTM(\overrightarrow{h_{j-1}},\mathbf{w}_{j}^E)\\ \overleftarrow{h_{j}}=& LSTM(\overleftarrow{h_{j+1}},\mathbf{w}_{j}^E)\\ h_{j}=&\overrightarrow{h_{j}}\oplus\overleftarrow{h_{j}} \end{aligned} \end{equation} where $\oplus$ denotes element-wise addition. 
Then, a word-level attention mechanism computes a score $\beta_{i,j}$ over each word $j$ in the context $c_i$ to get the final context representation $r_{c_i}$: \begin{equation} \begin{aligned} \alpha_j=& w^T\tanh(h_j)\\ \beta_{i,j}=&\frac{\exp(\alpha_j)}{\sum\limits_{k}\exp(\alpha_k)}\\ r_{c_i}=&\sum\limits_{j}\beta_{i,j}h_{j} \end{aligned} \end{equation} We use $r_i=[r_{m_i},r_{c_i}]\in R^{d_z}=R^{{d_w}+{d_l}+{d_l}}$ as the feature representation of $(m_i,c_i)$ and apply a feed-forward network $q$ over $r_i$ to get the feature vector $z_i$; $q$ has $n$ layers with $h_n$ hidden units each and uses ReLU activations. \subsection{Compact Latent Space Clustering for Distant Supervision} \label{sec: CSLC} \begin{figure*} \includegraphics[width=1.0\linewidth,height=0.32\linewidth]{BnotA-wdc-newcclp.pdf} \caption{A demonstration of the CLSC process. (a) represents the feature extraction step; (b)$\rightarrow$(h) shows the traditional type classification process (each color represents one candidate type), where a suboptimal classifier makes predictions for each mention and misclassifies A as the blue type; (c)$\rightarrow$(d)$\rightarrow$(e)$\rightarrow$(f)$\rightarrow$(g) demonstrates the process of CLSC as described in Section \ref{sec:method}. Through label propagation and compact clustering, our model is able to group mentions of the same type into a dense region and leaves clear separation boundaries in sparse regions.} \label{fig:CLSC} \end{figure*} The overview of the CLSC regularization is exhibited in Figure \ref{fig:CLSC}, which includes three steps: dynamic graph construction (Figure \ref{fig:CLSC}c), label propagation (Figure \ref{fig:CLSC}d, e), and Markov chains (Figure \ref{fig:CLSC}g). The idea of compact clustering for semi-supervised learning was first proposed by \cite{pmlr-v80-kamnitsas18a}. The basic idea is to encourage mentions of the same type to be clustered into a dense region in the embedding space. We introduce the details of CLSC for distantly supervised FET in the following sections. \smallskip \noindent \textbf{Dynamic Graph Construction:} We start by creating a fully connected graph $G$ over the batch of samples $\mathbf{Z}=\{z_i\}_{i=1}^B$, as shown in Figure \ref{fig:CLSC}c\footnote{$\mathbf{Z}=\{z_i\}_{i=1}^B$ is a small subsample of the entire dataset; we did not observe significant performance gains as the batch size increased.}. Each node of $G$ is a feature representation $z_i$, and edge weights are given by a scaled dot-product similarity \cite{vaswani2017attention}: \begin{equation} \begin{aligned} A_{ij}=&exp(\frac{z_i^Tz_j}{\sqrt{d_z}}), \forall z_i,z_j\in{\mathbf{Z}}\\ A=&exp(\frac{Z^TZ}{\sqrt{d_z}})\label{eq:distancefunc} \end{aligned} \end{equation} Each entry $A_{ij}$ measures the similarity between $z_i$ and $z_j$; $A\in{R}^{B\times{B}}$ can be viewed as the weighted adjacency matrix of $G$. \smallskip \noindent \textbf{Label Propagation:}\label{subsub:lp} The end goal of CLSC is to cluster mentions of the same type into a dense region. For mentions with more than one labeled type, we apply label propagation (\textbf{LP}) on $G$ to estimate their type distributions. Formally, we denote by $\mathbf{\Phi}\in R^{B\times{K}}$ the label propagation posterior of a training batch. The original label propagation algorithm proposed by \cite{zhu2002learning} uses a transition matrix $H$ to model the probability of a node $i$ propagating its type posterior $\mathbf{\phi}_i=P(y_i|x_i)\in{R}^K$ to the other nodes. 
Each entry of the transition matrix $H\in{R}^{B\times{B}}$ is defined as: \begin{equation} H_{ij}=A_{ij}/\sum_b{A_{ib}} \end{equation} The original label propagation algorithm proceeds as follows: \begin{enumerate} \item Propagate the labels by the transition matrix $H$: $\mathbf{\Phi}^{(t+1)}={H}\mathbf{\Phi}^{(t)}$ \item Clamp the labeled data to their true labels. Repeat from step 1 until $\mathbf{\Phi}$ converges \end{enumerate} In this work $\Phi^{(0)}$ is randomly initialized\footnote{We also explored other initializations (e.g., uniform initialization), but found no essential performance difference among them.}. Unlike unlabeled data in semi-supervised learning, distantly labeled mentions in FET have a limited set of candidate types. Based on this observation, we assume that $(m_i,c_i)$ can transmit and receive probability mass only for types in $\mathcal{Y}_i^t$, whether it is noisy or clean data. Formally, we define an indicator matrix $M\in{R}^{B\times{K}}$, where $M_{ij}=1$ if type $j$ is in $\mathcal{Y}_i^t$ and $0$ otherwise; here $B$ is the batch size and $K$ is the number of types. Our clamping step relies on $M$, as shown in Figure \ref{fig:CLSC}d: \begin{equation} \Phi_{ij}^{(t+1)}\leftarrow\Phi_{ij}^{(t+1)}M_{ij}/\sum_k{\Phi_{ik}^{(t+1)}M_{ik}} \end{equation} In practice, we iterate these two steps $S_{lp}$ times, where $S_{lp}$ is a hyperparameter. \smallskip \noindent \textbf{Compact Clustering:} \label{sec:method-compact} The \textbf{LP} posterior $\mathbf{\Phi}=\mathbf{\Phi}^{(S_{lp}+1)}$ is used to judge the label agreement between samples. In the desired optimal state, transition probabilities between samples should be uniform within the same class and zero between different classes. Based on this assumption, the desirable transition matrix $T\in{R}^{B\times{B}}$ is defined as: \begin{equation} T_{ij}=\sum_{k=1}^K\Phi_{ik}\frac{\Phi_{jk}}{m_k}, m_k=\sum_{b=1}^B\Phi_{bk} \end{equation} $m_k$ is a normalization term for class $k$. The transition matrix $H$ derived from $z((m_i,c_i);\theta_z)$ should agree with $T$. Thus we minimize the cross entropy between $T$ and $H$: \begin{equation} \mathcal{L}_{1-step}=-\frac{1}{B^2}\sum_{i=1}^B\sum_{j=1}^BT_{ij}log(H_{ij})\label{1-step loss} \end{equation} \noindent For instance, if $T_{ij}$ is close to 1, then $H_{ij}$ must grow, which increases $A_{ij}$ and in turn optimizes $\theta_z$ (Eq.\ref{eq:distancefunc}). The loss $\mathcal{L}_{1-step}$ largely describes the regularization we apply to $z((m_i,c_i);\theta_z)$ for compact clustering. In order to preserve the structure of existing clusters, \cite{pmlr-v80-kamnitsas18a} proposed an extension of $\mathcal{L}_{1-step}$ to the case of \textbf{Markov chains} with multiple transitions between samples, which should remain within a single class. The extension maximizes the probability of paths that traverse only samples belonging to one class. Define $E\in{R}^{B\times{B}}$ as: \begin{equation} E=\mathbf{\Phi}\mathbf{\Phi}^T \end{equation} $E_{ij}$ measures the label similarity between $z_i$ and $z_j$, which is used to mask transitions between different clusters. The extension is given by: \begin{equation} \begin{aligned} H^{(1)} = &H\\ H^{(s)} = &(H\odot{E})^{(s-1)}H\\ = &(H\odot{E})H^{(s-1)}, \end{aligned} \end{equation} where $\odot{}$ is the Hadamard product, and $H_{ij}^{(s)}$ is the probability that a Markov process transits from node $i$ to node $j$ in $s$ steps while remaining within a single class for the first $s-1$ transitions. 
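The full regularization pipeline---graph construction, clamped label propagation, and the masked multi-step loss (Eq.~\ref{eq:cclploss} below)---can be summarized in a short NumPy sketch. This sketch is illustrative only: the variable names, the small numerical floor inside the logarithm, and the toy hyperparameter values are ours.
\begin{verbatim}
# Illustrative NumPy sketch of CLSC: dynamic graph construction,
# clamped label propagation, and the masked multi-step loss.
import numpy as np

def propagate(Z, M, S_lp=10):
    """Z: (B, d_z) batch features; M: (B, K) candidate-type mask."""
    B, d_z = Z.shape
    A = np.exp(Z @ Z.T / np.sqrt(d_z))       # scaled dot-product similarity
    H = A / A.sum(axis=1, keepdims=True)     # row-normalized transitions
    Phi = np.random.rand(*M.shape)           # random init, then clamp
    Phi = Phi * M / (Phi * M).sum(axis=1, keepdims=True)
    for _ in range(S_lp):
        Phi = H @ Phi                        # step 1: propagate
        Phi = Phi * M                        # step 2: clamp to candidates
        Phi /= Phi.sum(axis=1, keepdims=True)
    return Phi, H

def clsc_loss(Phi, H, S_m=3):
    """Cross entropy between the target T and masked powers of H."""
    B = H.shape[0]
    T = (Phi / Phi.sum(axis=0, keepdims=True)) @ Phi.T  # desired transitions
    E = Phi @ Phi.T                          # label-agreement mask
    Hs, loss = H.copy(), 0.0
    for _ in range(S_m):                     # s = 1 .. S_m
        loss += -(T * np.log(Hs + 1e-12)).sum() / B**2
        Hs = (H * E) @ Hs                    # H^(s+1) = (H .* E) H^(s)
    return loss / S_m
\end{verbatim}
In an actual training step this computation would be expressed in the deep learning framework so that gradients can flow back through $H$ (Eq.~\ref{eq:distancefunc}) to $\theta_z$.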
The extended loss function models paths of different lengths $s$ between samples on the graph: \begin{equation} \mathcal{L}_{clsc}=-\frac{1}{S_m}\frac{1}{B^2}\sum_{s=1}^{S_m}\sum_{i=1}^B\sum_{j=1}^BT_{ij}log(H_{ij}^{(s)}).\label{eq:cclploss} \end{equation} For $S_m=1$, $\mathcal{L}_{clsc}=\mathcal{L}_{1-step}$. By minimizing the cross entropy between $T$ and $H^{(s)}$ (Eq.\ref{eq:cclploss}), $\mathcal{L}_{clsc}$ compacts paths of different lengths between samples within the same class. Here, $S_{m}$ is a hyperparameter that controls the maximum length of the Markov chain. $\mathcal{L}_{clsc}$ is added to the final objective function as a regularizer to encourage compact clustering. \subsection{Overall Objective} Given the representation of a mention, the type posterior is given by a standard softmax classifier parameterized by $\theta_g$: \begin{equation} P(\hat{y_i}|z_i;\theta_g)=softmax(W_c{z_i}+b_c), \end{equation} where $W_c\in{R^{K\times{d_z}}}$ is a parameter matrix and $b_c\in{R^K}$ is the bias vector. The predicted type is then given by $\hat{t_i}=argmax_{y_i}P(\hat{y_i}|z_i;\theta_g)$. Our loss function consists of two parts. $\mathcal{L}_{sup}$ is the supervision loss, the cross entropy between the ground-truth labels and the predicted posterior: \begin{equation} \begin{aligned} \mathcal{L}_{sup}=&-\frac{1}{B_c}\sum_{i=1}^{B_c}\sum_{k=1}^Ky_{ik}log(P(y_i|z_i;\theta_g))_k \label{eq:clean loss} \end{aligned} \end{equation} Here $B_c$ is the number of clean samples in a training batch and $K$ is the number of types. The regularization term is given by $\mathcal{L}_{clsc}$. Hence, the overall loss function is: \begin{equation} \mathcal{L}_{final}=\mathcal{L}_{sup}+\lambda_{clsc}\times\mathcal{L}_{clsc} \end{equation} where $\lambda_{clsc}$ is a hyperparameter that controls the influence of CLSC. \begin{table*}[] \small \centering \begin{tabular}{llcccccc} \toprule[2pt] \multicolumn{2}{c}{\multirow{2}{*}{\textbf{Method}}} & \multicolumn{3}{c}{\textbf{OntoNotes}} & \multicolumn{3}{c}{\textbf{BBN}} \\ \cmidrule[1pt]{3-8} \multicolumn{2}{c}{} & \textbf{Strict Acc.} & \multicolumn{1}{l}{\textbf{Macro F1}} & \multicolumn{1}{l}{\textbf{Micro F1}} & \textbf{Strict Acc.} & \textbf{Macro F1} & \textbf{Micro F1} \\ \midrule[1pt] \multicolumn{2}{l}{\textbf{AFET} \cite{ren2016afet}} & 55.3 & 71.2 & 64.6 & 68.3 & 74.4 & 74.7 \\ \multicolumn{2}{l}{\textbf{{AAA}} \cite{abhishek2017fine}} & 52.2 & 68.5 & 63.3 & 65.5 & 73.6 & 75.2 \\ \multicolumn{2}{l}{\textbf{Attentive}~ \cite{shimaoka2016attentive}} & 51.7 & 71.0 & 64.91 & 48.4 & 73.2 & 72.4 \\ \multicolumn{2}{l}{\textbf{PLE+HYENA} \cite{ren2016label}} & 54.6 & 69.2 & 62.5 & 69.2 & 73.1 & 73.2 \\ \multicolumn{2}{l}{\textbf{{PLE+FIGER}}~~ \cite{ren2016label}} & 57.2 & 71.5 & 66.1 & 68.5 & 77.7 & 75.0 \\ \midrule[1pt] \midrule[1pt] \multirow{2}{*}{\textbf{NFETC}} & $clean$ & 54.4$\pm$0.3 & 71.5$\pm$0.4 & 64.9$\pm$0.3 & 71.2$\pm$0.2 & 77.1$\pm$0.3 & 76.9$\pm$0.3 \\ \cmidrule[1pt]{2-8} & $+noisy$ & 54.8$\pm$0.4 & 71.8$\pm$0.4 & 65.0$\pm$0.4 & 73.8$\pm$0.6 & 78.4$\pm$0.6 & 78.9$\pm$0.6 \\ \midrule[1pt] \multirow{2}{*}{\textbf{NFETC}\textsubscript{$hier$}} & $clean$ & 59.6$\pm$0.2 & 76.1$\pm$0.2 & 69.7$\pm$0.2 & 70.3$\pm$0.3 & 76.8$\pm$0.3 & 76.6$\pm$0.2 \\ \cmidrule[1pt]{2-8} & $+noisy$ & 60.2$\pm$0.2 & 76.4$\pm$0.1 & 70.2$\pm$0.2 & \textbf{73.9$\pm$1.2} & 78.8$\pm$1.2 & \textbf{79.4$\pm$1.1} \\ \midrule[1pt]\midrule[1pt] \multirow{2}{*}{\textbf{NFETC-CLSC}} & $clean$ & 59.1$\pm$0.4 & 75.3$\pm$0.3 & 69.1$\pm$0.3 & 73.0$\pm$0.3 & 79.0$\pm$0.3 & 78.8$\pm$0.3 \\ \cmidrule[1pt]{2-8} & $+noisy$ & 59.6$\pm$0.2 & 
75.5$\pm$0.4 & 69.3$\pm$0.4 & \textbf{74.7$\pm$0.3} & \textbf{80.7$\pm$0.2} & \textbf{80.5$\pm$0.2} \\ \midrule[1pt] \multirow{2}{*}{\textbf{NFETC-CLSC}\textsubscript{$hier$}} & $clean$ & 61.5$\pm$0.3 & 77.4$\pm$0.3 & 71.4$\pm$0.4 & 70.5$\pm$0.2 & 78.2$\pm$0.2 & 78.0$\pm$0.2 \\ \cmidrule[1pt]{2-8} & $+noisy$ & \textbf{62.8$\pm$0.3} & \textbf{77.8$\pm$0.4} & \textbf{72.0$\pm$0.4} & 71.9$\pm$0.3 & 79.8$\pm$0.4 & 79.5$\pm$0.3 \\ \bottomrule[2pt] \end{tabular} \caption{Performance comparison of FET systems on the two datasets.}\label{tb:results} \end{table*} \section{Experiments} \begin{table}[] \centering \small \begin{tabular}{|l|l|l|} \hline & \textbf{OntoNotes} & \textbf{BBN} \\ \hline \textbf{\#types} & 89 & 47 \\ \hline \textbf{Max hierarchy depth} & 3 & 2 \\ \hline \textbf{\#mentions-train} & 253241 & 86078 \\ \hline \textbf{\#mentions-test} & 8963 & 12845 \\ \hline \textbf{\%clean mentions-train} & 73.13 & 75.92 \\ \hline \textbf{\%clean mentions-test} & 94.00 & 100 \\ \hline \textbf{Average $|\mathcal{Y}_i^t|$} & 1.40 & 1.26 \\ \hline \end{tabular} \caption{Detailed statistics of the two datasets.} \label{tb:stati} \end{table} \subsection{Dataset} We evaluate our method on two standard benchmarks, OntoNotes and BBN: \begin{itemize} \item \textbf{OntoNotes:} The OntoNotes dataset is composed of sentences from the Newswire part of the OntoNotes corpus \cite{weischedel2013ontonotes}. \cite{gillick2014context} annotated the training part with the aid of DBpedia Spotlight \cite{daiber2013improving}, while the test data is manually annotated. \item \textbf{BBN:} The BBN dataset is composed of sentences from Wall Street Journal articles and is manually annotated by \cite{weischedel2005bbn}. \cite{ren2016afet} regenerated the training corpus via distant supervision. \end{itemize} In this work we use the preprocessed datasets provided by \cite{abhishek2017fine,xu2018neural}. Table \ref{tb:stati} shows detailed statistics of the datasets. \subsection{Compared Methods} We compare the proposed method with several state-of-the-art FET systems\footnote{The baseline results are taken from \cite{abhishek2017fine,xu2018neural}, except for the performance of NFETC on BBN, for which we searched the hyperparameters ourselves; \cite{xu2018neural} did not report results on BBN.}: \begin{itemize} \item \textbf{Attentive} \cite{shimaoka2016attentive} uses an attention-based feature extractor and does not distinguish clean from noisy data; \item \textbf{AFET} \cite{ren2016afet} trains label embeddings with a partial-label loss; \item \textbf{AAA} \cite{abhishek2017fine} learns a joint representation of mentions and type labels; \item \textbf{PLE+HYENA/FIGER} \cite{ren2016label} proposes heterogeneous partial-label embedding for label noise reduction to boost typing systems. We compare two PLE models with HYENA \cite{yogatama2015embedding} and FIGER \cite{ling2012fine} as the base typing system, respectively; \item \textbf{NFETC} \cite{xu2018neural} trains a neural fine-grained typing system with a hierarchy-aware loss. We compare the performance of the NFETC model with two different loss functions: partial-label loss and \textbf{PLL}+hierarchical loss. We denote the two variants as $\mathbf{NFETC}$ and $\mathbf{NFETC}_{hier}$, respectively; \item \textbf{NFETC-CLSC} is the proposed model in this work. 
We use the NFETC model as our base model, on top of which we apply the Compact Latent Space Clustering regularization described in Section \ref{sec: CSLC}. Similarly, we report results produced by using both the KL-divergence-based loss ($\textbf{NFETC-}\mathbf{CLSC}$) and the \textbf{KL}+hierarchical loss ($\textbf{NFETC-}\mathbf{CLSC}_{hier}$). \end{itemize} \subsection{Evaluation Settings} For evaluation metrics, we adopt the strict accuracy, loose macro, and loose micro F-scores widely used in the FET task \cite{ling2012fine}. To fine-tune the hyperparameters, we randomly sampled 10\% of the test set as a development set for both datasets. With the tuned hyperparameters described in Section \ref{sec:hp}, we run the model five times and report the average strict accuracy, macro F1, and micro F1 on the test set. \subsection{Hyper Parameters}\label{sec:hp} We search the hyperparameters for OntoNotes and BBN separately via Hyperopt \cite{bergstra2013hyperopt}. The selected hyperparameters are shown in \textbf{Appendix A}. We optimize the model via the Adam optimizer. The full set of hyperparameters includes the learning rate $lr$, the dimension $d_p$ of the word position embedding, the dimension $d_l$ of the mention encoder's output (equal to the dimension of the context encoder's output), the input dropout keep probability $p_i$ and output dropout keep probability $p_o$ for LSTM layers (in the context encoder and LSTM mention encoder), the L2 regularization parameter $\lambda$, the factor $\alpha$ of the hierarchical loss normalization ($\alpha>0$ means the normalization is used), BN (whether batch normalization is used), the maximum step $S_{lp}$ of the label propagation, the maximum length $S_m$ of the Markov chain, the influence parameter $\lambda_{clsc}$ of CLSC, the batch size $B$, the number $n$ of hidden layers in $q$, and the number $h_n$ of hidden units in the hidden layers. We implement all models using TensorFlow\footnote{The code for experiments is available at https://github.com/herbertchen1/NFETC-CLSC}. \subsection{Performance comparison and analysis} Table \ref{tb:results} shows the performance comparison between the proposed CLSC model and state-of-the-art FET systems. On both benchmarks, the CLSC model achieves the best performance in all three metrics. When focusing on the comparison between \textbf{NFETC} and CLSC, we make the following observations: \begin{itemize} \item Compact Latent Space Clustering shows its effectiveness on both clean data and noisy data. By applying the CLSC regularization on the basic \textbf{NFETC} model, we observe consistent and significant performance boosts; \item The hierarchy-aware loss shows a significant advantage on the OntoNotes dataset, while showing an insignificant performance boost on the BBN dataset. This is due to the different label distributions of the test sets: the proportion of terminal types in the test set is $69\%$ for the BBN dataset but only $33\%$ for the OntoNotes dataset. Thus, applying the hierarchy-aware loss on the BBN dataset brings little improvement; \item Both algorithms are able to utilize noisy data to improve performance, so we further study their performance in different noise scenarios in the following discussions. 
\end{itemize} \subsection{How robust are the methods to the proportion of noisy data?} \begin{figure}[!htb] \centering \subfloat{\label{Fig:R1} \hspace{-3em} \includegraphics[width=0.9\linewidth]{ontcurveclip.pdf}} \centering \quad \subfloat{\label{Fig:R2} \hspace{-3em} \includegraphics[width=0.9\linewidth]{bbncurveclip.pdf}} \centering \caption{Performance comparison between \textbf{NFETC-CLSC} and \textbf{NFETC} when removing $75\%$--$95\%$ of the clean data.} \label{fig:noise compare} \end{figure} In principle, with a sufficient amount of clean training data, most typing systems can achieve satisfactory performance. To further study the robustness of the methods to label noise, we compare their performance in the presence of $25\%, 20\%, 15\%, 10\%$, and $5\%$ of the clean training data and all of the noisy training data. Figure \ref{fig:noise compare} shows the performance curves as the proportion of clean data drops. As the figure reveals, the CLSC model consistently wins the comparison. The advantage is especially clear on the BBN dataset, which offers a smaller amount of training data. Note that, with only $27.9\%$ of the training data (keeping only $5\%$ of the clean data) on the BBN dataset, the CLSC model yields a result comparable to the \textbf{NFETC} model trained on the full data. This comparison clearly shows the superiority of our approach in utilizing noisy data. \subsection{Ablation: Do Markov Chains improve typing performance?} Table \ref{tb:l1step-ab} shows the performance of CLSC with the one-step transition ($\mathcal{L}_{1-step}$) and with Markov chains ($\mathcal{L}_{clsc}$) as described in Section \ref{sec:method-compact}. The results show that the use of Markov chains does improve the overall performance, which is consistent with the model intuition. \section{Related Work} Named entity recognition (NER), which classifies mentions into coarse-grained types (e.g., person, location), has been studied for a long time \cite{collins1999unsupervised,manning2014stanford}. Recently, \cite{nagesh2018exploration,nagesh2018keep} applied ladder networks \cite{rasmus2015semi} to coarse-grained entity classification in a semi-supervised learning fashion. \cite{ling2012fine} proposed fine-grained entity recognition (FET) and used distant supervision to obtain a training corpus for it. Embedding techniques have been applied to learn feature representations since \cite{yogatama2015embedding,dong2015hybrid}. \cite{shimaoka2016attentive} introduced an attention mechanism for FET to capture informative words. \cite{xin2018improving} used the TransE entity embeddings \cite{bordes2013translating} as the query vector of attention. \\ Early works ignored out-of-context noise; \cite{gillick2014context} proposed context-dependent FET and used three heuristics to clean the noisy labels, with the side effect of losing training data. To utilize noisy data, \cite{ren2016afet} distinguished the loss function of noisy data from that of clean data via partial-label loss (\textbf{PLL}). \cite{abhishek2017fine,xu2018neural} proposed variants of \textbf{PLL}, which still suffer from confirmation bias. \cite{xu2018neural} proposed a hierarchical loss to handle over-specific noise. On top of \textbf{AFET}, \cite{ren2016label} proposed a method, \textbf{PLE}, to reduce label noise, which led to great success in FET. Because label noise reduction is separated from the learning of FET, there may be an error propagation problem. 
Recently, \cite{xin2018put} proposed using a pretrained language model to measure the compatibility between contexts and type names, and used it to resist the interference of noisy labels. However, the compatibility estimated by the language model may not be reliable, and type information is defined by the corpus and annotation guidelines rather than by type names, as mentioned in \cite{azad2018unified}. In addition, there is some work on entity-level typing, which aims to determine the types of entities in a KB \cite{yaghoobzadeh2015corpus,jin2018attributed}. \begin{table} \centering \small \begin{tabular}{|l|c|} \hline & Strict Acc. \\ \hline \textbf{CLSC(c)}$(\mathcal{L}_{1-step})$ & 72.0$\pm$0.1 \\ \hline \textbf{CLSC(c)}$(\mathcal{L}_{clsc})$ & 73.0$\pm$0.3 \\ \hline \textbf{CLSC(c+n)}$(\mathcal{L}_{1-step})$ & 73.0$\pm$0.1 \\ \hline \textbf{CLSC(c+n)}$(\mathcal{L}_{clsc})$ & 74.7$\pm$0.3 \\ \hline \end{tabular} \caption{The comparison of $\mathcal{L}_{1-step}$ and $\mathcal{L}_{clsc}$ on BBN.}\label{tb:l1step-ab} \end{table} \section{Conclusion} In this paper, we propose a new method for distantly supervised fine-grained entity typing, which leverages imperfect annotations as model regularization via Compact Latent Space Clustering (CLSC). Experiments on two standard benchmarks demonstrate that our method consistently outperforms state-of-the-art models. Further study reveals that our method is more robust than the previous state-of-the-art approach as the portion of noisy data rises. The proposed method generalizes to other tasks with imperfect annotation. As part of future work, we plan to apply the approach to other distantly supervised tasks, such as relation extraction. \section{Acknowledgments} This work has been supported in part by NSFC (No.61751209, U1611461), Zhejiang University-iFLYTEK Joint Research Center, Chinese Knowledge Center of Engineering Science and Technology (CKCEST), Engineering Research Center of Digital Library, Ministry of Education. Xiang Ren's research has been supported in part by National Science Foundation SMA 18-29268.
\section{Introduction} \label{sec:intro} Over the past several years, the advent of software-defined networking (SDN), along with improvements in optical switching technology, has given network operators more flexibility in configuring their in-ground optical fiber into an IP network. Whereas traditionally, at network design time, each IP link was assigned a fixed optical path and bandwidth, modern SDN controllers can program colorless and directionless Reconfigurable Optical Add/Drop Multiplexers (CD ROADMs) to remap the IP topology to the optical underlay on the fly, while the network continues carrying traffic and without deploying technicians to remote sites (Figure \ref{fig:layered-architecture}) \cite{Birk2016, Choudhury2017, Choudhury2018, Tse2018}. \begin{figure} \includegraphics[width=\columnwidth]{layered-architecture2.png} \caption{Layered IP/optical architecture. The highlighted orange optical spans comprise one possible mapping of the orange IP link to the optical layer. Alternatively, the SDN controller could remap the same orange IP link to follow the black optical path.} \label{fig:layered-architecture} \end{figure} In the traditional setting, if a router failure or fiber cut causes an IP link to go down, all resources that were being used for that IP link are rendered useless. There are two viable strategies to recover from any single optical span or IP router failure. First, we could independently restore the optical and IP layers, depending on the specific failure; we could perform pure optical recovery in the case of an optical span failure or pure IP recovery in the case of an IP router failure. Note that the strategy we refer to as ``pure optical recovery'' of course involves reestablishing the IP link over the new optical path. We call it ``pure optical recovery'' because once the link has been recreated over the new optical path, the change is transparent to the IP layer. Second, we could design the network with sufficient capacity and path diversity that we can at runtime perform pure IP restoration. In practice, ISPs have used the latter strategy, as it is generally more resource-efficient \cite{Chiu2001}. With CD ROADMs, however, the optical and electrical equipment can be repurposed for setting up the same IP link along a different path, or even for setting up a different IP link. In the context of failure recovery, the important upshot is that joint multilayer (IP and optical) failure recovery is now possible at runtime. The SDN controller is responsible for performing this remote reprogramming of both CD ROADMs and routers; while we generally think of SDN as operating at the network layer and above, it is now extending into the physical layer. Thus, SDN-enabled CD ROADMs shift the boundary between network design and network operation (Figure \ref{fig:design-vs-operation}). We use the term network \emph{design} to refer to any changes that happen on a human timescale, e.g., installing new routers or dispatching a crew to fix a failed link. We use network \emph{operation} to refer to changes that can happen on a smaller timescale, e.g., adjusting routing in response to switch or link failures or changing demands. \begin{figure} \includegraphics[width=\columnwidth]{design-vs-operation.png} \caption{Components of network design vs.\ network operation in (from left to right): traditional networks, existing studies on how best to take advantage of CD ROADMs, and this paper. 
The vertical dimension is timescale.} \label{fig:design-vs-operation} \end{figure} As Figure \ref{fig:design-vs-operation} shows, network design \emph{used to} comprise IP link placement. To describe what it now entails, we must provide background on IP/optical backbone architecture (Figure \ref{fig:backbone}). The limiting resources in the design of an IP backbone are the equipment housed at each IP and optical-only node. Specifically, an IP node's responsibility is to terminate optical links and convert the optical signal to an electrical signal, and to do so it needs enough \emph{tails} (\emph{tail} is shorthand for the combination of an optical transponder and a router port). An optical node must maintain the optical signal over long distances, and to do so it needs enough \emph{regenerators} or \emph{regens} for the IP links passing through it. Therefore, we state the new network design problem precisely as follows: \emph{Place tails and regens in a manner that minimizes cost while allowing the network to carry all expected traffic, even in the presence of equipment failures}. This new paradigm creates both new opportunities and challenges in the design and operation of backbone networks \cite{Chiu2007}. Previous work has explored the advantages of joint multilayer optimization over traditional IP-only optimization \cite{Birk2016, Choudhury2017, Choudhury2018, Tse2018} (e.g., see Table 1 of \cite{Choudhury2018}). However, these authors primarily resorted to heuristic optimization and restoration algorithms, due to the restrictions of routing (avoiding splitting flows into arbitrary proportions), the need for different restoration and latency guarantees for different quality-of-service classes, and the desirability of fast run times. Further complicating matters is that network components fail, and when they do a production backbone must reestablish connectivity within seconds. Tails and regens cannot be purchased or relocated at this timescale, and therefore our network design must be \emph{robust} to a set of possible failure scenarios. Importantly, we consider as \emph{failure scenarios} any single optical fiber cut or IP router failure. There are other possible causes of failure (e.g., single IP router port, ROADM, transponder, power failure), which allow for various alternative recovery techniques, but we focus on these two. Existing techniques respond efficiently to IP layer failures \cite{Chiu2007} \emph{or} optical layer failures, but ours is the first to jointly optimize over the two. Thus, we overcome three main challenges to present an exact formulation and solution to the network design problem. \begin{enumerate} \item The solution must be a single tail and regen configuration that works for all single IP router and optical fiber failures. This configuration should minimize cost under the assumption that the IP link topology will be reconfigured in response to each failure. \label{challenge1} \item The positions of regens relative to each other along the optical path determine which IP links are possible. \label{challenge2} \item The problem is computationally complex because it requires integer variables and constraints. Each tail and each regen supports a 100 Gbps IP link. Multiple tails or multiple regens can be combined at a single location to build a faster link, but they cannot be split into, e.g., 25 Gbps units that cost 25\% of a full element. 
\end{enumerate} These challenges arise because the recent shift in the boundary between network design and operation fundamentally changes the design problem; simply including link placement in network operation optimizations does not suffice to fully take advantage of CD ROADMs. A network design is optimal relative to a certain set of assumptions about what can be reconfigured at runtime. Hence, traditional network designs are only optimal \emph{under the assumption that tails and regens are fixed to their assigned IP links}. With CD ROADMs, the optimal network design must be computed \emph{under the assumption that IP links will be adjusted} in response to failures or changing traffic demands. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{terminology.png} \caption{IP/optical network terminology.} \label{fig:backbone} \end{figure*} To this end, we make three main contributions. \begin{enumerate} \item After describing the importance of jointly optimizing over the IP and optical layers in Section \ref{sec:background}, we formulate the optimal network design algorithm (Section \ref{sec:problem}). In this way we address challenges \eqref{challenge1} and \eqref{challenge2} from above. \item We present two scalable, time-efficient approximation algorithms for the network design problem, addressing the computational complexity introduced by the integer constraints (Section \ref{sec:scalable approximations}), and we explain which use cases are best suited to each of our algorithms (Section \ref{subsec:roles}). \item We evaluate our three algorithms in relation to each other and to legacy networks (Section \ref{sec:eval}). \end{enumerate} We discuss related work in Section \ref{sec:related} and conclude in Section \ref{sec:conclusion}. \section{IP/Optical Failure Recovery} \label{sec:background} In this section we provide more background on IP/optical networks. We begin by defining key terms and introducing a running example (Section \ref{subsec:background terminology example}). We then use this example to discuss various failure recovery options in both traditional (Section \ref{subsec:traditional}) and CD ROADM (Section \ref{subsec:background CD ROADM}) IP/optical networks. \subsection{IP/Optical Network Architecture} \label{subsec:background terminology example} As shown in Figure \ref{fig:backbone}, an IP/optical network consists of optical fiber, the IP nodes where fibers meet, the optical nodes stationed intermittently along fiber segments, and the edge nodes that serve as the sources and destinations of traffic. We do not consider the links connecting an edge router to a core IP router as part of our design problem; we assume these are already placed and fault tolerant. Each IP node houses one or more IP \emph{routers}, each with zero or more tails, and zero or more optical regens. Each optical-only node houses zero or more optical regens but cannot contain any routers (Figure \ref{fig:backbone}). While IP and optical nodes serve as the endpoints of optical spans and segments, specific IP routers serve as the endpoints of IP links. For our purposes, an \emph{optical span} is the smallest unit describing a stretch of optical fiber; an optical span is the section of fiber between any two nodes, be they IP or optical-only. Optical-only nodes can join multiple optical spans into a single \emph{optical segment}, which is a stretch of fiber terminated at both ends by IP nodes. The path of a single optical segment may contain one or more optical-only nodes. 
The physical layer underlying each \emph{IP link} comprises one or more optical segments. An IP link is terminated at each end by a specific IP router and can travel over multiple optical segments if its path traverses an intermediate IP node without terminating at one of that node's routers. Figure \ref{fig:backbone} illustrates the roles of optical spans and segments and IP links. The locations of all nodes and optical spans are fixed and cannot be changed, either at design time or during network operation. An optical signal can travel only a finite distance along the fiber before it must be regenerated; every \t{\sc regen\_dist}\ miles the optical signal must pass through a regen, where it is converted from an optical signal to an electrical signal and then back to optical before being sent out the other end. The exact value of \t{\sc regen\_dist}\ varies depending on the specific optical components, but it is roughly 1000 miles for our setting of a long-distance ISP backbone with 100 Gbps technology. We use the value of $\t{\sc regen\_dist} = 1000$ miles throughout this paper (we sketch this placement rule in code below). \para{Example network design problem.} The network in Figure \ref{fig:background-example} has two IP nodes, \ipnodename{1} and \ipnodename{2}, and five optical-only nodes, \optname{1}-\optname{5}. \ipnodename{1} and \ipnodename{2} each have two IP routers (\routername{1}, \routername{2} and \routername{3}, \routername{4}, respectively). Edge routers \edgename{1} and \edgename{2} are the sources and destinations of all traffic. The problem is to design the optimal IP network, requiring the fewest tails and regens, to carry 80 Gbps from \edgename{1} to \edgename{2} while surviving any single optical span or IP router failure. We do not consider failures of \edgename{1} or \edgename{2}, because failing the source or destination would render the problem trivial or impossible, respectively. If we don't need to be robust to any failures, the optimal solution is to add one 100 Gbps IP link from \routername{1} to \routername{3} over the nodes \ipnodename{1}, \optname{1}, \optname{2}, \optname{3}, and \ipnodename{2}. This solution requires one tail each at \routername{1} and \routername{3} and one regen at \optname{2}, for a total of two tails and one regen. \begin{figure} \includegraphics[width=\columnwidth]{background-example.png} \caption{Example optical network illustrating the different options for failure restoration. The number near each edge is the edge's length in miles.} \label{fig:background-example} \end{figure} \subsection{Failure Recovery in Traditional Networks} \label{subsec:traditional} In the traditional setting, the design problem is to place IP links; in this setting, once an IP link is placed at design time, its tails and regens are permanently committed to it. If one optical span or router fails, the entire IP link fails and the rest of its resources lie idle. During network operation, we may only adjust routing over the established IP links. In general, this setup allows for four possible types of failure restoration. Two of these techniques are inadequate because they cannot recover from all relevant failure scenarios (first two rows of Table \ref{tab:failure restoration}). The other two are effective but suboptimal in their resource requirements (third and fourth rows of Table \ref{tab:failure restoration}). We describe these four approaches below, guided by the running example shown in Figure \ref{fig:background-example}. 
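The regen placement rule from Section \ref{subsec:background terminology example} can be made concrete with a short greedy sketch. The span lengths below are hypothetical stand-ins for the values shown in Figure \ref{fig:background-example}, chosen so that the shortest path needs exactly one regen, as in the no-failure solution above.
\begin{verbatim}
# Greedy regen placement along an optical path: regenerate at the last
# node reachable before the accumulated distance would exceed REGEN_DIST.
# The node names and span lengths below are hypothetical.
REGEN_DIST = 1000  # miles

def regen_sites(nodes, span_miles):
    """nodes: path nodes; span_miles[i]: length of span nodes[i]->nodes[i+1].
    Returns the intermediate nodes where regens must be placed."""
    sites, travelled = [], 0
    for i, span in enumerate(span_miles):
        if span > REGEN_DIST:
            raise ValueError("a single span exceeds the optical reach")
        if travelled + span > REGEN_DIST:
            sites.append(nodes[i])   # regenerate before entering this span
            travelled = 0
        travelled += span
    return sites

print(regen_sites(["I1", "O1", "O2", "O3", "I2"], [400, 450, 450, 400]))
# -> ['O2']: one regen, at O2, as in the no-failure solution above
\end{verbatim}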
In Section \ref{subsec:background CD ROADM} we show that CD ROADMs allow for a network design that meets our problem's requirements in a more cost-effective way. \begin{table} \caption{Properties of various failure recovery approaches. The first four techniques are possible in legacy and CD ROADM networks, while the fifth requires CD ROADMs.} \label{tab:failure restoration} \begin{tabular}{l|cccc} \multicolumn{1}{c|}{\textbf{Recovery Technique}} & \thead{\# Tails} & \thead{\# Regens} & \thead{IP?} & \thead{Optical?} \\ \hline pure optical & 2 & 2 & \text{\color{red}\ding{55}} & \checkmark \\ pure IP, shortest path & 4 & 4 & \checkmark & \text{\color{red}\ding{55}} \\ pure IP, any path & 4 & 3 & \checkmark & \checkmark \\ separate IP and optical & 4 & 4 & \checkmark & \checkmark \\[0.3em] \textbf{joint IP/optical} & \textbf{4} & \textbf{2} & \checkmark & \checkmark \end{tabular} \end{table} \para{Inadequate recovery techniques.} In \emph{pure optical layer} restoration, if an optical span fails, we reroute each affected IP link over the optical network by avoiding the failed span. The rerouted path may require additional regens. In the example shown in Figure \ref{fig:background-example}, this amounts to rerouting the IP link along the alternate path \ipnodename{1}-\optname{4}-\optname{2}-\optname{5}-\ipnodename{2} whenever any span on the primary path fails. This path requires one regen each at \optname{4} and \optname{2}. However, because the (\ipnodename{1}, \ipnodename{2}) link will never be instantiated over both paths simultaneously, the second path can reuse the original regen at \optname{2}. Hence, we need only buy one extra regen at \optname{4}, for a total of two tails (at \ipnodename{1} and \ipnodename{2}) and two regens (at \optname{2} and \optname{4}). The problem with this pure optical restoration strategy is that it cannot protect against IP router failures. In \emph{pure IP layer restoration with each IP link routed along its shortest optical path}, we maintain enough fixed IP links such that during any failure condition, the surviving IP links can carry the required traffic. If any component of an IP link fails, then the entire IP link fails and even the intact components cannot be used. In large networks, this policy usually finds a feasible solution to protect against any single router or optical span failure. However, it may not be optimally cost-effective due to the restriction that IP links follow the shortest optical paths. Furthermore, in small networks it may not provide a solution that is robust to all optical span failures. If we only care about IP layer failures, the optimal strategy for our running example is to place two 100 Gbps links, one from \routername{1} to \routername{3} and a second from \routername{2} to \routername{4}, both following the optical path \ipnodename{1}-\optname{1}-\optname{2}-\optname{3}-\ipnodename{2}. Though this design is robust to the failure of any one of \routername{1}, \routername{2}, \routername{3}, and \routername{4}, it cannot protect against optical span failures. \para{Correct but suboptimal recovery techniques.} In contrast to the two failure recovery mechanisms described above, the following two techniques can correctly recover from any single IP router or optical span failure. However, neither reliably produces the least expensive network design. 
\emph{Pure IP layer restoration with no restriction on how IP links are routed over the optical network} is the same as IP restoration over shortest paths, except that IP links can be routed over any optical path. With this policy, we always find a feasible solution for all failure conditions, and it is the most cost-effective of the possible pure-IP solutions. However, its solutions still require more tails or regens than those produced by our ILP, and solving for this case is computationally complex. In terms of Figure \ref{fig:background-example}, pure IP restoration with no restriction on IP links' optical paths entails routing the (\routername{1}, \routername{3}) IP link along the \ipnodename{1}-\optname{1}-\optname{2}-\optname{3}-\ipnodename{2} path and the (\routername{2}, \routername{4}) IP link along the \ipnodename{1}-\optname{4}-\optname{2}-\optname{5}-\ipnodename{2} path. This requires two tails plus one regen (at \optname{2}) for the first IP link and two tails plus two regens (at \optname{4} and \optname{2}) for the second IP link, for a total of four tails and three regens. The final failure recovery technique possible in legacy networks, without CD ROADMs, is \emph{pure IP layer restoration for router failures and pure optical layer restoration for optical failures.} This policy works in all cases but is usually more expensive than the two pure IP layer restorations mentioned above. In terms of our running example, we need two tails and two regens for each of two IP links, as we showed in our discussion of pure IP recovery along shortest paths. Hence, this strategy requires a total of four tails and four regens. In summary, the optimal network design with legacy technology that is robust to optical and IP failures requires four tails and three regens. \subsection{Failure Recovery in CD ROADM Networks} \label{subsec:background CD ROADM} A modern IP/optical network architecture is identical to that described in Section \ref{subsec:background terminology example} aside from the presence of an SDN controller. This single logical controller receives notifications of the changing status of any IP or optical component, as well as any changes in traffic demands between any pair of edge routers, and uses this information to compute the optimal IP link configuration and the optimal routing of traffic over these links. It then communicates the relevant link configuration instructions to the CD ROADMs and the relevant forwarding table changes to the IP routers. As in the traditional setting, we cannot add or remove edge nodes, IP nodes, optical-only nodes, or optical fiber. But now the design problem is to decide how many tails to place on each router and how many regens to place at each IP and optical node; no longer must we commit to fixed IP links at design time. Routing remains a key component of the network design problem, though it is now joined by IP link placement. Any of the four existing failure recovery techniques is possible in a modern network. In addition, the presence of SDN-controlled CD ROADMs allows for a fifth option, joint IP/optical recovery. In contrast to the traditional setting, IP links can now be reconfigured at runtime. As above, suppose the design calls for an IP link between \routername{1} and \routername{3} over the optical path \ipnodename{1}-\optname{1}-\optname{2}-\optname{3}-\ipnodename{2}. Now, these resources are \emph{not} permanently committed to this IP link. 
If one component fails, the remaining tails and regens can be repurposed either to reroute the (\ipnodename{1}, \ipnodename{2}) link over a different optical path or to (help) establish an entirely new IP link. Returning to our running example, with joint IP/optical restoration, we can recover from any single IP or optical failure with just one IP link from \routername{1} to \routername{3}. If any optical span fails, this link shifts from its original shortest path, which needs a regen at \optname{2}, to the path \ipnodename{1}-\optname{4}-\optname{2}-\optname{5}-\ipnodename{2}, which needs regens at \optname{2} and \optname{4}. Importantly, the regen at \optname{2} can be reused. Hence, thus far we need two tails and two regens. To account for the possibility of \routername{1} failing, we add an extra tail at \routername{2}; if \routername{1} fails then at runtime we create an IP link from \routername{2} to \routername{3} over the path \ipnodename{1}-\optname{1}-\optname{2}-\optname{3}-\ipnodename{2}. Since this link is only active in the case that \routername{1} has failed, it will never be instantiated at the same time as the (\routername{1}, \routername{3}) link and can therefore reuse the regen we already placed at \optname{2}. Finally, to account for the possibility of \routername{3} failing, we add an extra tail at \routername{4}. This way, at runtime we can create the IP link (\routername{1}, \routername{4}) along the path \ipnodename{1}-\optname{1}-\optname{2}-\optname{3}-\ipnodename{2}. Again, only one of these IP links will ever be active at one time, so we can reuse the regen at \optname{2}. Therefore, our final joint optimization design requires four tails and two regens. Hence, even in this simple topology, compared to the most cost-efficient traditional strategy, joint IP/optical optimization and failure recovery saves the cost of one regen. \subsubsection{A note on transient disruptions} As shown in Figure \ref{fig:design-vs-operation}, IP link configuration operates on the order of minutes, while routing operates on sub-second timescales. IP link configuration takes several minutes because the process entails the following three steps: \begin{enumerate} \item \label{reconfiguration1} Adding or dropping certain wavelengths at certain ROADMs; \item \label{reconfiguration2} Waiting for the network to return to a stable state; and \item \label{reconfiguration3} Ensuring that the network is indeed stable. \end{enumerate} A ``stable state'' is one in which the optical signal reaches tails at IP link endpoints with sufficient optical power to be correctly converted back into an electrical signal. Adding or dropping wavelengths at ROADMs temporarily reduces the signal's power enough to interfere with this optical-electrical conversion, thereby rendering the network temporarily unstable. Usually, the network correctly returns to a stable state within seconds of reprogramming the wavelengths (i.e., steps \eqref{reconfiguration1} and \eqref{reconfiguration2} finish within seconds). However, to ensure that the network is always operating with a stable physical layer (step \eqref{reconfiguration3}), manufacturers add a series of tests and adjustments to the reconfiguration procedure. These tests take several minutes, and therefore step \eqref{reconfiguration3} delays completion of the entire process. Researchers are currently working to bring reconfiguration latency down to the order of milliseconds \cite{Chiu2012}, similar to the timescale at which routing currently operates. 
However, for now we must account for a transition period of approximately two minutes during which the link configuration has not yet been updated and is therefore not optimal for the new failure scenario. During this transient period, the network may not be able to deliver all the offered traffic. We mitigate this harmful traffic loss by immediately reoptimizing routing over the existing topology while the network is transitioning to its new configuration. As we show in Section \ref{subsec:transient}, by doing so we successfully deliver the vast majority of offered traffic under almost all failure scenarios. Many operational ISPs carry multiple classes of traffic, and their service level agreements (SLAs) allow them to drop some low priority traffic under failure or extreme congestion. At one large ISP, approximately 40--60\% of traffic is low priority. We always deliver at least 50\% of traffic just by rerouting. \section{Network Design Problem} \label{sec:problem} We now describe the variables and constraints of our integer linear program (ILP) for solving the network design problem. After formally stating the objective function in Section \ref{subsec:objective}, we introduce the problem's constraints in Sections \ref{subsec:tails regens} and \ref{subsec:IP links}. To avoid cluttering our presentation of the main ideas of the model, throughout Sections \ref{subsec:objective}--\ref{subsec:IP links} we assume exactly one router per IP node. In Section \ref{subsec:extensions} we relax this assumption, and we also explain how to extend the model to changing traffic demands. For ease of explanation, we elide the distinction between edge nodes and IP nodes; we treat IP nodes as the ultimate sources and destinations of traffic. \subsection{Minimizing Network Cost} \label{subsec:objective} Our inputs are \begin{inparaenum}[(i)] \item the optical topology, consisting of the set \ensuremath{\mathcal{I}}\ of IP nodes, the set \ensuremath{O}\ of optical nodes, and the fiber links (annotated with distances) between them; and \item the demand matrix $D$. \end{inparaenum} We use the variable $T_{\node}$ to represent the number of tails that should be placed at router $\node$, and $R_{\ensuremath{u}}$ represents the number of regens at node \ensuremath{u}. An optical-only node can't have any tails. The capacity of an IP link $\ell = (\ensuremath{\alpha}, \ensuremath{\beta})$ is limited by the number of tails dedicated to $\ell$ at \ensuremath{\alpha}\ and \ensuremath{\beta}\ and the number of regens dedicated to $\ell$. Technically, the original signal emitted by \ensuremath{\alpha}\ is strong enough to travel \t{\sc regen\_dist}, so $\ell$ doesn't actually need a regen at \ensuremath{\alpha}. However, for ease of explanation, we assume that $\ell$ does need regens at \ensuremath{\alpha}, regardless of its length. This requirement of regens at the beginning of each IP link is necessary only for the mathematical model and not in the actual network. We add a trivial postprocessing step to remove these regens from the final count before reporting our results. Table \ref{tab:notation} summarizes our notation.
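To make the notation concrete before we state the optimization, the following minimal sketch shows how the design variables and the cost objective could be declared in gurobipy (Gurobi is the solver we use in Section \ref{sec:eval}). This is an illustration only, not our implementation; the node sets, failure list, and unit costs are hypothetical placeholders. \begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

model = gp.Model("network-design")

I_nodes = ["r1", "r2"]        # IP routers (hypothetical)
O_nodes = ["o1", "o2", "o3"]  # optical-only nodes (hypothetical)
F = ["no-failure", "f1"]      # failure scenarios (hypothetical)
c_T, c_R = 1.0, 1.0           # unit costs of a tail and a regen

# T[v]: tails placed at router v;  R[u]: regens placed at node u
T = model.addVars(I_nodes, vtype=GRB.INTEGER, lb=0, name="T")
R = model.addVars(O_nodes, vtype=GRB.INTEGER, lb=0, name="R")

# Objective: total equipment cost (stated formally below)
model.setObjective(c_T * T.sum() + c_R * R.sum(), GRB.MINIMIZE)
\end{verbatim}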
\begin{table*} \centering \caption{Notation.} \label{tab:notation} \begin{tabular}{|c|l|l|} \hline & & \multicolumn{1}{c|}{\textbf{Definition}} \\ \hline \multirow{8}{*}{\textbf{Inputs}} & \ensuremath{\mathcal{I}} & set of IP nodes \\ & \ensuremath{I} & set of IP routers \\ & \ensuremath{O} & set of optical-only nodes \\ & \ensuremath{N} & set of all nodes ($\ensuremath{N} = \ensuremath{\mathcal{I}} \cup \ensuremath{O}$) \\ & $D$ & demand matrix, where $D_{st} \in D$ gives the demand from IP node $s$ to IP node $t$ \\ & $F$ & set of all possible failure scenarios $F = \{f_1, f_2, \dots, f_n\}$ \\ & $dist_{\node\ensuremath{v} f}$ & shortest distance from optical node \node\ to optical node \ensuremath{v}\ in failure scenario $f$ \\ & $\ensuremath{O}_{\ensuremath{u} f}$ & set of all next-hops \ensuremath{v}\ with $dist_{\ensuremath{u}\ensuremath{v} f} < \t{\sc regen\_dist}$ \\ \hline \textbf{Outputs} & $T_\node$ & number of tails placed at IP router \node \\ {\small(Network Design)} & $R_\ensuremath{u}$ & total regens placed at optical node \ensuremath{u}\\[0.75em] \textbf{Outputs} & $X_{\ensuremath{\alpha}\ensuremath{\beta} f}$ & capacity of IP link $(\ensuremath{\alpha}, \ensuremath{\beta})$ in failure scenario $f$ \\ {\small(Network Operation)} & $Y_{st\ensuremath{\alpha}\ensuremath{\beta} f}$ & amount of $(s, t)$ traffic routed on IP link $(\ensuremath{\alpha}, \ensuremath{\beta})$ in failure scenario $f$ \\ \hline \textbf{Intermediate} & $R_{\ensuremath{\alpha}\ensuremath{\beta}\node\ensuremath{v} f}$ & number of regens at \node\ for optical segment $(\node, \ensuremath{v})$ of IP link $(\ensuremath{\alpha}, \ensuremath{\beta})$ in failure $f$ \\ \textbf{Values} & $R_{\ensuremath{u} f}$ & number of regens needed at optical node \ensuremath{u}\ in failure scenario $f$ \\ \hline \end{tabular} \end{table*} Our objective is to place tails and regens to minimize the ISP's equipment costs while ensuring that the network can carry all necessary traffic under all failure scenarios. Let \ensuremath{c_T}\ and \ensuremath{c_R}\ be the cost of one tail and one regen, respectively. Then the total cost of all tails is $\ensuremath{c_T} \sum_{\node \in \ensuremath{I}} T_{\node}$, the total cost of all regens is $\ensuremath{c_R} \sum_{\ensuremath{u} \in \ensuremath{O}} R_\ensuremath{u}$, and our objective is \begin{equation*} \min~~ \ensuremath{c_T} \sum_{\node \in \ensuremath{I}} T_{\node} + \ensuremath{c_R} \sum_{\ensuremath{u} \in \ensuremath{O}} R_\ensuremath{u}. \end{equation*} The stipulation that the output tail and regen placement work for all failure scenarios is crucial. Without some dynamism in the inputs, be it from a changing topology across failure scenarios or from a changing demand matrix, CD ROADMs' flexible reconfigurability would be useless. We focus on robustness to IP router and optical span failures because conversations with one large ISP indicate that failures affect network conditions more than routine demand fluctuations. Extending our model to find a placement robust to both equipment failures and changing demands should be straightforward. \subsection{Robust Placement of Tails and Regens} \label{subsec:tails regens} In traditional networks, robust design requires choosing a single IP link configuration that is optimal for all failure scenarios under the assumption that routing will depend on the specific failure state \cite{Chiu2007}.
With CD ROADMs, robust network design requires choosing a single tail/regen placement that is optimal for all failure scenarios under the assumption that both routing and the IP topology will depend on the specific failure state. In either case, solving the network design problem requires solving the network operation problem as an ``inner loop''; to determine the optimal network design we need to simulate how a candidate network would operate, in terms of IP link placement and routing, in each failure scenario. At the mathematical level, CD ROADMs introduce two additional sets of decision variables to the traditional network design optimization. With the old technology, the problem is to optimize over two sets of decision variables: one set for where to place IP links and what the capacities of those links should be, and a second set for which links each volume of traffic should traverse. In traditional network design, there is no need to model tails and regens explicitly, separate from link placement, because each tail or regen is associated with exactly one IP link. With CD ROADMs, a given tail or regen is no longer tied to a single IP link. Thus, we must decide not only link placement and routing but also the number of tails to place at each IP node and the number of regens to place at each site. We describe these two novel aspects of our formulation in turn. \para{Constraints governing tail placement.} Our first constraint requires that the number of tails placed at any router \node\ is enough to accommodate all the IP links \node\ terminates: \begin{eqnarray} \sum_{\ensuremath{\alpha} \in \ensuremath{I}} X_{\ensuremath{\alpha} \node f} & \leq & T_{\node} \label{eq:t>=incoming} \\ \sum_{\ensuremath{\beta} \in \ensuremath{I}} X_{\node\ensuremath{\beta} f} & \leq & T_{\node} \label{eq:t>=outgoing} \\[-0.5em] & & \forall \node \in \ensuremath{I}, \forall f \in F \nonumber \end{eqnarray} As shown in Table \ref{tab:notation}, $X_{\ensuremath{\alpha}\node f}$ is the capacity of IP link $(\ensuremath{\alpha}, \node)$ in failure scenario $f$. Hence, $\sum_{\ensuremath{\alpha} \in \ensuremath{I}} X_{\ensuremath{\alpha} \node f}$ is the total incoming bandwidth terminating at router \node, and Constraint \eqref{eq:t>=incoming} says that \node\ needs at least this number of tails. Analogously, $\sum_{\ensuremath{\beta} \in \ensuremath{I}} X_{\node\ensuremath{\beta} f}$ is the total outgoing bandwidth from \node, and Constraint \eqref{eq:t>=outgoing} ensures that \node\ has enough tails for these links, too. We don't need $T_{\node}$ to be as large as the sum of these quantities, because each tail supports a bidirectional link.
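Under the same hypothetical setup as in the sketch of Section \ref{subsec:objective}, Constraints \eqref{eq:t>=incoming} and \eqref{eq:t>=outgoing} translate almost verbatim into gurobipy, with the wildcard \texttt{'*'} playing the role of the summation index: \begin{verbatim}
# X[a, b, f]: capacity of IP link (a, b) in failure scenario f
X = model.addVars(I_nodes, I_nodes, F,
                  vtype=GRB.INTEGER, lb=0, name="X")

# tails at v must cover all capacity terminating at v, and
# (separately) all capacity originating at v
model.addConstrs((X.sum('*', v, f) <= T[v]
                  for v in I_nodes for f in F), name="tails_in")
model.addConstrs((X.sum(v, '*', f) <= T[v]
                  for v in I_nodes for f in F), name="tails_out")
\end{verbatim}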
\para{Constraints governing regen placement.} The second fundamental difference between our model and existing work is that we must account for the relative positioning of regens both within and across failure scenarios. Because of physical limitations on the distance an optical signal can travel, no IP link can include a span longer than \t{\sc regen\_dist}\ without passing through a regenerator. As a result, the decision to place a regen at one optical location depends on the decisions we make about other locations, both within a single failure scenario and across changing network conditions. To encode this coupling, we introduce auxiliary variables $R_{\ensuremath{\alpha}\ensuremath{\beta}\ensuremath{u}\ensuremath{v} f}$ to represent the number of regens to place at node \ensuremath{u}\ for the link between IP routers $(\ensuremath{\alpha}, \ensuremath{\beta})$ in failure scenario $f$ \emph{such that the next regen traversed will be at node \ensuremath{v}}. Ultimately, we want to solve for $R_\ensuremath{u}$, the number of regens to place at \ensuremath{u}, which doesn't depend on the IP link, next-hop regen, or failure scenario. But we need the $R_{\ensuremath{\alpha}\ensuremath{\beta}\ensuremath{u}\ensuremath{v} f}$ variables to encode these dependencies in our constraints. We connect $R_\ensuremath{u}$ to $R_{\ensuremath{\alpha}\ensuremath{\beta}\ensuremath{u}\ensuremath{v} f}$ with the constraint \begin{equation} \label{eq:r>=outgoing} R_{\ensuremath{u}} \geq \sum_{\substack{\ensuremath{\alpha}, \ensuremath{\beta} \in \ensuremath{I} \\ \ensuremath{v} \in \ensuremath{O}}} R_{\ensuremath{\alpha}\ensuremath{\beta}\ensuremath{u}\ensuremath{v} f} ~~ \forall \ensuremath{u} \in \ensuremath{O}, \forall f \in F. \end{equation} We use four additional constraints for the $R_{\ensuremath{\alpha}\ensuremath{\beta}\ensuremath{u}\ensuremath{v} f}$ variables. First, we prevent some node \ensuremath{v}\ from being the next-hop regen for some node \node\ if the shortest path between \node\ and \ensuremath{v}\ exceeds \t{\sc regen\_dist}: \begin{eqnarray*} \label{eq:regens constr last} R_{\ensuremath{\alpha}\ensuremath{\beta}\node\ensuremath{v} f} & = & 0 \\ & & \forall \ensuremath{\alpha}, \ensuremath{\beta} \in \ensuremath{I}, \nonumber \\[-0.5em] & & \forall \node, \ensuremath{v}~\text{such that}~ dist_{\node\ensuremath{v} f} > \t{\sc regen\_dist}. \nonumber \end{eqnarray*} Second, we ensure that the set of regens assigned to an IP link indeed forms a contiguous path. That is, for all nodes $u$ aside from those housing the source and destination routers (denoted \ensuremath{a}\ and \ensuremath{b}\ below), the number of regens assigned to $u$ equals the number of regens for which $u$ is the next-hop: \begin{eqnarray*} \sum_{v\in\ensuremath{N}} R_{\ensuremath{\alpha}\ensuremath{\beta} uvf} & = & \sum_{v\in\ensuremath{N}} R_{\ensuremath{\alpha}\ensuremath{\beta} vuf} \\ & & \forall u \in \ensuremath{N}\setminus\{\ensuremath{a}, \ensuremath{b}\}, \forall \ensuremath{\alpha}, \ensuremath{\beta} \in \ensuremath{I}, \forall f \in F.
\nonumber \end{eqnarray*} Third, we need sufficient regens at the source IP router's node \ensuremath{a}, and sufficient regens with the destination IP router's node \ensuremath{b}\ as their next-hop, for each IP link: \begin{eqnarray*} \label{eq:regens constr first} \sum_{\ensuremath{u} \in \ensuremath{N}} R_{\ensuremath{\alpha}\ensuremath{\beta}\ensuremath{a}\ensuremath{u} f} & \geq & X_{\ensuremath{\alpha}\ensuremath{\beta} f} \\ \sum_{\ensuremath{u} \in \ensuremath{N}} R_{\ensuremath{\alpha}\ensuremath{\beta}\ensuremath{u}\ensuremath{b} f} & \geq & X_{\ensuremath{\alpha}\ensuremath{\beta} f} \\[-0.5em] & & \forall \ensuremath{\alpha}, \ensuremath{\beta} \in \ensuremath{I}, \forall f \in F. \nonumber \end{eqnarray*} Fourth, \ensuremath{b}\ can't have any regens, and \ensuremath{a}\ can't be the next-hop location for any regens: \begin{equation*} \label{eq:no forwards backwards} R_{\alpha\beta u\ensuremath{a} f} = R_{\alpha\beta\ensuremath{b} uf} = 0 \end{equation*} \hfill$\forall u \in \ensuremath{N}, \forall \alpha, \beta \in \ensuremath{I}, \forall f \in F.$ \para{Additional practical constraints.} We add two practical constraints that are not fundamental to the general problem but are artifacts of the current state of routing technology. First, ISPs build IP links in bandwidths that are multiples of 100 Gbps. We encode this policy by requiring $X_{\ensuremath{\alpha}\ensuremath{\beta} f}$, $T_{\node}$, and $R_\ensuremath{u}$ to be integers and converting our demand matrix into 100 Gbps units. Second, current IP and optical equipment require each IP link to have the same capacity in both directions. With these constraints, only one of \eqref{eq:t>=incoming} and \eqref{eq:t>=outgoing} is necessary. Finally, we require all variables to take on nonnegative values. \subsection{Dynamic Placement of IP Links} \label{subsec:IP links} Thus far, we have described constraints ensuring that each IP link has enough tails and regens. But we have not discussed IP link placement or routing. Although link placement and routing \emph{themselves} are part of network operation rather than network design, they play central roles as \emph{parts} of the network design problem. How many tails and regens are ``enough'' for each IP link depends on the link's capacity, and the link's capacity depends on how much traffic it must carry. Therefore, the network operation problem is a subproblem of our network design optimization. These constraints are the well-known multicommodity flow (MCF) constraints requiring \begin{inparaenum}[(a)] \item flow conservation; \item that all demands are sent and received; and \item that the traffic assigned to a particular IP link cannot exceed the link's capacity. \end{inparaenum} $Y_{st\ensuremath{\alpha}\ensuremath{\beta} f}$ gives the amount of $(s, t)$ traffic routed on IP link $(\ensuremath{\alpha}, \ensuremath{\beta})$ in failure scenario $f$. Hence, we express these constraints with the following equations: \begin{align} \label{eq:operation first} \displaystyle\sum_{u \in \ensuremath{I}} Y_{stuvf} &= \displaystyle\sum_{u \in \ensuremath{I}} Y_{stvuf} & \forall (s, t) \in D, \\[-0.8em] && \forall v \in \ensuremath{I} - \{s, t\}, \forall f \in F \nonumber \\ \displaystyle\sum_{u \in \ensuremath{I}} Y_{stsuf} &= \displaystyle\sum_{u \in \ensuremath{I}} Y_{stutf} \\ &= D_{st} & \forall (s, t) \in D, \forall f \in F \nonumber \\ \displaystyle\sum_{(s, t) \in D} Y_{stuvf} &\leq X_{uvf} & \forall u, v \in \ensuremath{I}, \forall f \in F. \label{eq:operation last} \end{align} As before, $X_{uvf}$ in Constraint \eqref{eq:operation last} is the capacity of IP link $(u, v)$ in failure scenario $f$.
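These MCF constraints, too, translate directly; continuing the hedged gurobipy sketch from above (the demand dictionary \texttt{D} is again a hypothetical placeholder): \begin{verbatim}
# demands in 100 Gbps units, keyed by (source, destination); hypothetical
D = {("r1", "r2"): 4, ("r2", "r1"): 4}

# Y[s, t, a, b, f]: amount of (s, t) traffic on IP link (a, b) under f
Y = model.addVars([(s, t, a, b, f) for (s, t) in D
                   for a in I_nodes for b in I_nodes for f in F],
                  lb=0.0, name="Y")

for (s, t) in D:
    for f in F:
        # (a) flow conservation at intermediate routers
        for v in I_nodes:
            if v not in (s, t):
                model.addConstr(Y.sum(s, t, '*', v, f)
                                == Y.sum(s, t, v, '*', f))
        # (b) the full demand leaves s and arrives at t
        model.addConstr(Y.sum(s, t, s, '*', f) == D[s, t])
        model.addConstr(Y.sum(s, t, '*', t, f) == D[s, t])

# (c) traffic on an IP link cannot exceed that link's capacity
model.addConstrs((Y.sum('*', '*', u, v, f) <= X[u, v, f]
                  for u in I_nodes for v in I_nodes for f in F),
                 name="cap")
\end{verbatim}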
\para{Network design and operation in practice.} Once the network has been designed, we solve the network operation problem for whichever failure scenario represents the current state of the network by replacing the variables $T_{\node}$ and $R_{\ensuremath{u}}$ with their assigned values. \subsection{Extensions to a Wider Variety of Settings} \label{subsec:extensions} We now describe how to relax the assumptions we've made throughout Sections \ref{subsec:objective}--\ref{subsec:IP links} that \begin{inparaenum}[(a)] \item each IP node houses exactly one IP router; and \item traffic demands are constant. \end{inparaenum} \para{Accounting for multiple routers colocated at a single IP node.} If we assume that IP links connecting routers colocated within the same IP node always have the same cost as (short) external IP links (i.e., they require one tail at each router endpoint), then our model already allows for any number of IP routers at each IP node; if this assumption holds, then we simply treat colocated routers as if they were housed in nearby nodes, e.g., one mile apart. However, in general this assumption is not valid, because intra-IP-node links require one port per router, rather than a full tail (combination router port and optical transponder) at each end. Hence, intra-IP-node links are cheaper than even the shortest external links. To model costs accurately, we must account for them explicitly. To do so, we add the stipulation to all the constraints presented above that, whenever one constraint involves two IP routers, these IP routers cannot be colocated. Then, we add the following: Let $U$ be the set of IP routers containing $u$ and any other routers $u'$ colocated at the same IP node with $u$. Let $P_u$ be the number of ports placed at $u$ for intra-node links. Let \ensuremath{c_{P}}\ be the cost of one 100 Gbps port. Our objective function now becomes \begin{equation*} \min~~ \ensuremath{c_T} \sum_{\node \in \ensuremath{I}} T_{\node} + \ensuremath{c_R} \sum_{\ensuremath{u} \in \ensuremath{O}} R_\ensuremath{u} + \ensuremath{c_{P}} \sum_{\node\in\ensuremath{I}} P_{\node}. \end{equation*} Ultimately, we want to constrain the traffic traveling between $u$ and any $u'$ to fit within the intra-node links, as follows (cf.\ Constraint \eqref{eq:operation last}): \begin{equation*} \sum_{(s, t) \in D} Y_{stuu'f} \leq X_{uu'f} \qquad \forall u, u' \in U, \forall U \in \ensuremath{\mathcal{I}}, \forall f \in F. \end{equation*} But no $X_{uu'f}$ appear in the objective function; the links themselves have no defined cost. Hence, we add constraints to limit the capacity of these links to the number of ports $P_u$. Specifically, we use the analogs of \eqref{eq:t>=incoming} and \eqref{eq:t>=outgoing} to describe the relationship between the ports $P_u$ placed at $u$ (cf.\ the tails placed at $u$) and the intra-node links starting from $u$ (cf.\ the external IP links $X_{u\beta f}$) and ending at $u$ (cf.\ the external IP links $X_{\alpha uf}$): \begin{eqnarray*} \sum_{u' \in U} X_{u'uf} & \leq & P_{u} \label{eq:intra>=incoming} \\ \sum_{u' \in U} X_{uu'f} & \leq & P_{u} \label{eq:intra>=outgoing} \\[-0.5em] & & \forall U \in \ensuremath{\mathcal{I}}, \forall u \in U, \forall f \in F \nonumber \end{eqnarray*} \para{Accounting for changing traffic.} Thus far, we have described our model to accommodate changing failure conditions over time with a single traffic matrix.
In reality, traffic shifts as well. Adding this to the mathematical formulation is trivial. Wherever we currently consider all failure scenarios $f \in F$, we need only consider all $(\text{failure}, \text{traffic matrix})$ pairs. Unfortunately, while this change is straightforward from a mathematical perspective, it is computationally costly. The number of failure scenarios is a multiplicative factor on the model's complexity. If we extend the model to consider multiple traffic matrices, the number of different traffic matrices serves as an additional multiplier. \section{Scalable Approximations} \label{sec:scalable approximations} In theory, the network design algorithm presented above finds the optimal solution. We will call this approach {\sf\smaller Optimal}. However, {\sf\smaller Optimal}\ does not scale, even to networks of moderate size ($\sim 20$ IP nodes). To address this issue, we introduce two approximations, {\sf\smaller Simple}\ and {\sf\smaller Greedy}. {\sf\smaller Optimal}\ is unscalable because, as network size increases, not only does the problem for any given failure scenario become more complex, but the number of failure scenarios also increases. In a network with $\ell$ optical spans, $n$ IP nodes, and $d$ separate demands, the total number of variables and constraints in {\sf\smaller Optimal}\ is a monotonically increasing function $g(\ell, n, d)$ of the size of the network and demand matrix, multiplied by the number of failure scenarios, $\ell + n$. Thus, increasing network size has a multiplicative effect on {\sf\smaller Optimal}'s complexity. The key to {\sf\smaller Simple}\ and {\sf\smaller Greedy}\ is to decouple these two factors. \subsection{{\sf\smaller Simple}\ Parallelization of Failure Scenarios} In {\sf\smaller Simple}, we solve the placement problem separately for each failure condition. That is, if {\sf\smaller Optimal}\ jointly considers failure scenarios labeled $F = \{1, 2, 3\}$, then {\sf\smaller Simple}\ solves one optimization for $F = \{1\}$, another for $F = \{2\}$, and a third for $F = \{3\}$. The final number of tails and regens required at each site is the maximum required over all scenarios. Each of the $\ell + n$ optimizations is exactly as described in Section \ref{sec:problem}; the only difference is the definition of $F$. Hence, each optimization has $g(\ell, n, d)$ variables and constraints. The problems are independent of each other, and therefore we can solve for all failure scenarios in parallel. As network size increases, we only pay for the increase in $g(\ell, n, d)$, without an extra multiplicative penalty for an increasing number of failure scenarios. \subsection{{\sf\smaller Greedy}\ Sequencing of Failure Scenarios} {\sf\smaller Greedy}\ is similar to {\sf\smaller Simple}, except that we solve for the separate failure scenarios in sequence, taking into account where tails and regens have been placed in previous iterations. In {\sf\smaller Simple}, the $\ell + n$ optimizations are completely independent, which is ideal from a time-efficiency perspective. However, one drawback is that {\sf\smaller Simple}\ misses some opportunities to share tails and regens across failure scenarios. Often, the algorithm is indifferent between placing tails at router $a$ or router $b$, so it arbitrarily chooses one. {\sf\smaller Simple}\ might happen to choose $a$ for Failure 1 and $b$ for Failure 2, thereby producing a final solution with tails at both.
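The outer loop of {\sf\smaller Simple}\ can be summarized in a few lines; the sketch below assumes a hypothetical routine \texttt{solve\_one(f)} that builds and solves the Section \ref{sec:problem} ILP with $F=\{f\}$ and returns a dict mapping each site to its equipment count. ({\sf\smaller Greedy}, described next, instead runs these solves sequentially and feeds the running placement into each one.) \begin{verbatim}
from multiprocessing import Pool

def simple(scenarios, solve_one):
    """Solve every failure scenario independently (in parallel),
    then keep the per-site maximum over all solutions."""
    with Pool() as pool:
        per_scenario = pool.map(solve_one, scenarios)
    placement = {}
    for counts in per_scenario:
        for site, k in counts.items():
            placement[site] = max(placement.get(site, 0), k)
    return placement
\end{verbatim}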
In contrast, {\sf\smaller Greedy}\ knows when solving for Failure 2 that tails have already been placed at $a$ in the solution to Failure 1. Thus, {\sf\smaller Greedy}\ knows that a better \emph{overall} solution is to reuse these, rather than place additional tails at $b$. Mathematically, {\sf\smaller Greedy}\ is like {\sf\smaller Simple}\ in that it requires solving $|F|$ separate optimizations, each considering one failure scenario. But, letting $\ensuremath{T'}_{\node}$ represent the number of tails already placed at $\node$, we replace Constraints \eqref{eq:t>=incoming} and \eqref{eq:t>=outgoing} with the following. \begin{eqnarray} \sum_{\ensuremath{\alpha} \in \ensuremath{I}} X_{\ensuremath{\alpha} \node f} & \leq & T_{\node} + \ensuremath{T'}_{\node} \label{eq:t+tap>=incoming} \\ \sum_{\ensuremath{\beta} \in \ensuremath{I}} X_{\node\ensuremath{\beta} f} & \leq & T_{\node} + \ensuremath{T'}_{\node} \label{eq:t+tap>=outgoing} \\[-0.5em] & & \forall \node \in \ensuremath{I}, \forall f \in F \nonumber \end{eqnarray} In \eqref{eq:t+tap>=incoming} and \eqref{eq:t+tap>=outgoing}, $T_{\node}$ represents the number of new tails to place at router \node, not counting the $T'_{\node}$ already placed. Similarly, with $\ensuremath{R'}_{\ensuremath{u}}$ defined as the number of regens already placed at $\ensuremath{u}$ and $R_{\ensuremath{u}}$ as the new regens to place, Constraint \eqref{eq:r>=outgoing} becomes \begin{equation*} \label{eq:r+rap>=outgoing} R_{\ensuremath{u}} + \ensuremath{R'}_{\ensuremath{u}} \geq \sum_{\substack{\ensuremath{\alpha}, \ensuremath{\beta} \in \ensuremath{I} \\ \ensuremath{v} \in \ensuremath{O}}} R_{\ensuremath{\alpha}\ensuremath{\beta}\ensuremath{u}\ensuremath{v} f} ~~ \forall \ensuremath{u} \in \ensuremath{O}, \forall f \in F. \end{equation*} We always solve the no-failure scenario first, as a baseline. After that, we find that the order of the remaining failure scenarios does not matter much. With {\sf\smaller Greedy}, we solve for the $\ell + n$ failure scenarios in sequence, but each problem has only $g(\ell, n, d)$ variables and constraints. In {\sf\smaller Greedy}\ the number of failure scenarios thus enters as an additive factor, compared to the multiplicative factor in {\sf\smaller Optimal}\ and no factor at all in {\sf\smaller Simple}\ (where the scenarios are solved in parallel). \subsection{Roles of {\sf\smaller Simple}, {\sf\smaller Greedy}, and {\sf\smaller Optimal}} \label{subsec:roles} As we will show in Section \ref{sec:eval}, {\sf\smaller Greedy}\ finds solutions of nearly the same cost as {\sf\smaller Optimal}\ in a fraction of the time. {\sf\smaller Simple}\ universally performs worse than both. We introduce {\sf\smaller Simple}\ for theoretical completeness, though due to its poor performance we don't recommend it in practice; {\sf\smaller Simple}\ and {\sf\smaller Optimal}\ represent the two extremes of the spectrum of joint optimization across failure scenarios, and {\sf\smaller Greedy}\ falls in between. We see both {\sf\smaller Optimal}\ and {\sf\smaller Greedy}\ as useful and complementary tools for network design, with each algorithm best suited to its own set of use cases. {\sf\smaller Optimal}\ helps us understand exactly how our constraints regarding tails, regens, and demands interact and affect the final solution. It is best used on a scaled-down, simplified network to \begin{inparaenum}[(a)] \item answer questions such as \emph{How do changes in the relative costs of tails and regens affect the final solution?}; and \item serve as a baseline for {\sf\smaller Greedy}.
\end{inparaenum} Without {\sf\smaller Optimal}, we wouldn't know how close {\sf\smaller Greedy}\ comes to finding the optimal solution. Hence, we might fruitlessly continue searching for a better heuristic. Once we demonstrate that {\sf\smaller Optimal}\ and {\sf\smaller Greedy}\ find comparable solutions on topologies that both can solve, we have confidence that {\sf\smaller Greedy}\ will do a good job on networks too large for {\sf\smaller Optimal}. In contrast, {\sf\smaller Greedy}'s time efficiency makes it ideally suited to place tails and regens for the full-sized network. In addition, {\sf\smaller Greedy}\ directly models the process of incrementally upgrading an existing network. The foundation of {\sf\smaller Greedy}\ is to take some tails and regens as fixed and to optimize the placement of additional equipment to meet the constraints. When we explained {\sf\smaller Greedy}, we described these already-placed tails and regens as resulting from previously considered failure scenarios. But they can just as well have previously existed in the network. \section{Evaluation} \label{sec:eval} First, we show that CD ROADMs indeed offer savings compared to the existing, fixed IP link technology by showing that all of {\sf\smaller Simple}, {\sf\smaller Greedy}, and {\sf\smaller Optimal}\ outperform current best practices in network design. Then we compare these three algorithms in terms of quality of solutions and scalability. We show that {\sf\smaller Greedy}\ achieves similar results to {\sf\smaller Optimal}\ in less time. Finally, we show that our algorithms should allow ISPs to meet their SLAs even during the transient period following a failure, before the network has had time to transition to the new optimal IP link configuration. \subsection{Experiment Setup} \para{Topology and traffic matrix.} Figure \ref{fig:Topology-9node} shows the topology used for our experiments, which is representative of the core of a backbone network of a large ISP. The network shown in Figure \ref{fig:Topology-9node} has nine edge switches, which are the sources and destinations of all traffic demands. Each edge switch is connected to two IP routers, which are colocated within one central office and share a single optical connection to the outside world. The network has an additional 16 optical-only nodes, which serve as possible regen locations. To isolate the benefits of our approach to minimizing tails and regens, respectively, we create two versions of the topology in Figure \ref{fig:Topology-9node}. The first, which we call {\sf\smaller 9node-450}, assigns a distance of 450 miles to each optical span. In this topology, neighboring IP routers are only 900 miles apart, so an IP link between them doesn't need a regen. The second version, {\sf\smaller 9node-600}, assigns a distance of 600 miles to each optical span. In this topology, regens are required for any IP link. To evaluate our optimizations on networks of various sizes, we also look at a topology consisting of just the upper left corner of Figure \ref{fig:Topology-9node} (above the horizontal thick dashed line and to the left of the vertical thick dashed line). We refer to the 450 mile version of this topology as {\sf\smaller 4node-450}\ and the 600 mile version as {\sf\smaller 4node-600}. Second, we look at the upper two-thirds (above the thick dashed line) with optical spans of 450 miles ({\sf\smaller 6node-450}) and 600 miles ({\sf\smaller 6node-600}).
Finally, we consider the entire topology ({\sf\smaller 9node-450}\ and {\sf\smaller 9node-600}). \begin{figure} \includegraphics[width=\columnwidth]{Topology-9node.png} \caption{Topology used for experiments. We call the full network {\sf\smaller 9node-450}/{\sf\smaller 9node-600}, the upper two-thirds (above the thick dashed line) {\sf\smaller 6node-450}/{\sf\smaller 6node-600}, and the upper left corner {\sf\smaller 4node-450}/{\sf\smaller 4node-600}.} \label{fig:Topology-9node} \end{figure} For each topology, we use a traffic matrix in which each edge router sends 440 Gbps to each other edge router. In our experiments we assume costs of 1 unit for each tail and 1 unit for each regen, while communication between colocated routers is free. We use Gurobi version 8 to solve our integer linear programs. \para{Alternative strategy.} We compare {\sf\smaller Optimal}, {\sf\smaller Greedy}, and {\sf\smaller Simple}\ to {\sf\smaller Legacy}, the method currently used by ISPs to construct their networks. Once built, an IP link is fixed, and if any component fails, the link is down and all other components previously dedicated to it are unusable. In our {\sf\smaller Legacy}\ algorithm, we assume that IP links follow the shortest optical path. Similar to {\sf\smaller Greedy}, we begin by computing the optimal IP topology for the no-failure case. We then designate those links as already paid for and solve the first failure case under the condition that reusing any of these links is ``free.'' We add any additional links placed in this iteration to the already-placed collection and repeat this process for all failure scenarios. {\sf\smaller Legacy}\ is the pure IP layer optimization and failure restoration described in Section \ref{sec:background}. As described previously, we need not compare our approaches to pure optical restoration, because pure optical restoration cannot recover from IP router failures. We need not compare against independent optical and IP restoration, because this technique generally performs worse than pure-IP or IP-along-disjoint-paths. We compare against IP-along-shortest-paths, rather than IP-along-disjoint-paths, for two reasons. First, the main drawback of IP-along-shortest-paths is that, in general, it does not guarantee recovery from optical span failure. However, on our example topologies {\sf\smaller Legacy}\ \emph{can} handle any optical failure. Second, the formulation of the rigorous IP-along-disjoint-paths optimization is nearly as complex as the formulation of {\sf\smaller Optimal}; if we remove the restriction that IP links must follow shortest paths, then we need constraints like those described in Section \ref{subsec:tails regens} to place regens every 1000 miles along a link's path. For this reason, ISPs generally do not formulate and solve the rigorous IP-along-disjoint-paths optimization. Instead, they hand-place IP links according to heuristics and historical precedent. We don't use this approach because it is too subjective and not scientifically replicable. In summary, IP-along-shortest-paths strikes the appropriate balance among \begin{inparaenum}[(a)] \item effectiveness at finding close to the optimal solution possible with traditional technology; \item realism; \item simplicity for our implementation and explanation; and \item simplicity for the reader's understanding and ability to replicate.
\end{inparaenum} \begin{figure*} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=.8\textwidth]{opt-leg-cost450-gray.png} \caption{Neighboring optical nodes 450 miles apart.} \label{subfig:opt-leg-cost450} \end{subfigure}\begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.8\textwidth]{opt-leg-cost600-gray.png} \caption{Neighboring optical nodes 600 miles apart.} \label{subfig:opt-leg-cost600} \end{subfigure} \caption{Total cost (tails + regens) by topology for {\sf\smaller Optimal}\ and {\sf\smaller Legacy}. {\sf\smaller Optimal}\ outperforms {\sf\smaller Legacy}\ on all topologies, and the gap is greatest on the largest network.} \label{fig:cost} \end{figure*} \subsection{Benefits of CD ROADMs} \label{subsec:CD ROADM benefits} To justify the utility of CD ROADM technology, we show that building an optimal CD ROADM network offers up to 29\% savings compared to building a legacy network. Since neither approach requires any regens on the 450 mile networks, all those savings come from tails. On {\sf\smaller 4node-600}, {\sf\smaller Optimal}\ requires 15\% fewer tails and 38\% fewer regens. On {\sf\smaller 6node-600}\ we achieve even greater savings, using 20\% fewer tails and 44\% fewer regens. On {\sf\smaller 9node-600}, {\sf\smaller Optimal}\ uses 16\% \emph{more} tails than {\sf\smaller Legacy}\ but more than compensates by requiring 55\% fewer regens, for an overall savings of 23\%. The bars in Figure \ref{fig:cost} illustrate the differences in total cost. Comparing Figures \ref{subfig:opt-leg-cost450} and \ref{subfig:opt-leg-cost600}, we see that {\sf\smaller Optimal}\ offers greater savings compared to {\sf\smaller Legacy}\ on the 600 mile networks. This is because regens, more so than tails, present opportunities for reuse across failure scenarios. {\sf\smaller Optimal}\ capitalizes on this opportunity while {\sf\smaller Legacy}\ doesn't; both algorithms find solutions with close to the theoretical lower bound in tails, but {\sf\smaller Legacy}\ in general is inefficient with regen placement. Since no regens are necessary for the 450 mile topologies, this benefit of {\sf\smaller Optimal}\ compared to {\sf\smaller Legacy}\ manifests itself only on the 600 mile networks. In these experiments we allow up to five minutes per failure scenario for {\sf\smaller Legacy}\ and the equivalent total time for {\sf\smaller Optimal}\ (i.e., 300 sec $\times$ 21 failure scenarios = 6300 sec for {\sf\smaller 4node-450}\ and {\sf\smaller 4node-600}, 300 sec $\times$ 35 = 10500 sec for {\sf\smaller 6node-450}\ and {\sf\smaller 6node-600}, and 300 sec $\times$ 59 = 17700 sec for {\sf\smaller 9node-450}\ and {\sf\smaller 9node-600}). \subsection{Scalability Benefits of {\sf\smaller Greedy}} \label{subsec:greedy scalability benefits} As Figure \ref{fig:timing} shows, {\sf\smaller Greedy}\ outperforms {\sf\smaller Optimal}\ when both are limited to a short amount of time. ``Short'' here is relative to topology; Figure \ref{fig:timing} illustrates that the crossover point is around 1200 seconds for {\sf\smaller 4node-600}. In contrast, both {\sf\smaller Greedy}\ and {\sf\smaller Optimal}\ always outperform {\sf\smaller Simple}, even at the shortest time limits. The design {\sf\smaller Greedy}\ produces costs at most 1.3\% more than the design generated by {\sf\smaller Optimal}, while {\sf\smaller Simple}'s design costs up to 12.4\% more than that of {\sf\smaller Optimal}\ and 11.0\% more than that of {\sf\smaller Greedy}.
Reported times for these experiments do \emph{not} parallelize {\sf\smaller Simple}'s failure scenarios; we show the summed total time. In addition, the times for {\sf\smaller Greedy}\ and {\sf\smaller Simple}\ are an upper bound. We set a time limit of $t$ seconds for each of the $|F|$ failure scenarios, and we plot each algorithm's objective value at $t|F|$. \begin{figure}\includegraphics[width=\columnwidth]{timing-big4-600-labeled.png} \caption{Total cost by computation time for {\sf\smaller Simple}, {\sf\smaller Greedy}, and {\sf\smaller Optimal}\ on {\sf\smaller 4node-600}. Lines do not start at $t = 0$ because Gurobi requires some amount of time to find any feasible solution.} \label{fig:timing} \end{figure} Interestingly, the objective values of {\sf\smaller Simple}\ (for this topology) and of {\sf\smaller Greedy}\ (for some other topologies) do not monotonically decrease with increasing time. We suspect this is because their solutions for failure scenario $i$ depend on their solutions to all previous failures. Suppose that, on failure $i - j$, Gurobi finds a solution $s$ of cost $c$ after 60 seconds. If given 100 seconds per failure scenario, Gurobi might use the extra time to pivot from the particular solution $s$ to an equivalent-cost solution $s'$, in an endeavor to find a configuration with an objective value less than $c$ on this particular iteration. Since both $s$ and $s'$ give a cost of $c$ for iteration $i - j$, Gurobi has no problem returning $s'$. But it's possible that $s'$ ultimately leads to a slightly worse overall solution than $s$. As Figure \ref{fig:timing} shows, these differences are at most 10 tails and regens, and they occur only at the lowest time limits. \subsection{Behavior During IP Link Reconfiguration} \label{subsec:transient} In the previous two subsections, we evaluate the steady-state performance of {\sf\smaller Optimal}, along with {\sf\smaller Greedy}, {\sf\smaller Simple}, and {\sf\smaller Legacy}, after the network has had time to transition both routing and the IP link configuration to their new optimal settings based on the current failure scenario. However, as we describe in Section \ref{subsec:background CD ROADM}, there exists a period of approximately two minutes during which routing has already adapted to the new network conditions but IP links have not yet finished reconfiguration. In this section we show that our approach gracefully handles this transient period, as well. The fundamental difference between these experiments and those in Sections \ref{subsec:CD ROADM benefits} and \ref{subsec:greedy scalability benefits} is that here we disallow IP link reconfiguration. Whereas in Sections \ref{subsec:CD ROADM benefits} and \ref{subsec:greedy scalability benefits} we jointly optimize both IP link configuration and routing in response to each failure scenario, we now reoptimize only routing; for each failure scenario we restrict ourselves to the links that were both already established in the no-failure case and have not been brought down by said failure. Specifically, in these experiments we begin with the no-failure IP link configuration as determined by {\sf\smaller Optimal}. Then, one by one we consider each failure scenario, noting the fraction of offered traffic we can carry on this topology simply by switching from {\sf\smaller Optimal}'s no-failure routing to whatever is now the best setup given the failure under consideration. Figure \ref{fig:transient} shows our results.
The graphs are CDFs illustrating the fraction of failure scenarios (indicated on the $y$-axis) for which we can deliver at least the fraction of traffic denoted on the $x$-axis. For example, the red point at $(0.85, 50\%)$ in Figure \ref{subfig:opt-transient450} indicates that in 50\% of the 59 failure scenarios under consideration for {\sf\smaller 9node-450}, we can deliver at least 85\% of offered traffic just by reoptimizing routing. The blue line in Figure \ref{subfig:opt-transient450} represents the results of taking the 21 failure scenarios of {\sf\smaller 4node-450}\ in turn, and for each recording the fraction of offered traffic routed. The blue line in Figure \ref{subfig:opt-transient600} shows the same for the 21 failure scenarios of {\sf\smaller 4node-600}, while the orange lines show the 35 failure scenarios for {\sf\smaller 6node-450}\ and {\sf\smaller 6node-600}, and the red lines show the 59 failure scenarios for the large topologies. We find two key takeaways from Figure \ref{fig:transient}. First, across all six topologies we always deliver at least 50\% of traffic. Second, our results improve as the number of nodes in the network increases, and we do better on the topologies requiring regens than on those that don't. On {\sf\smaller 9node-600}, we're always able to route at least 80\% of traffic. Generally, ISPs' SLAs require them to always deliver all high priority traffic, which typically represents about 40--60\% of total load. However, in the presence of failures or extreme congestion they're allowed to drop low priority traffic. Since most operational backbones are larger even than our {\sf\smaller 9node-600}\ topology, our results suggest that our algorithms should always allow ISPs to meet their SLAs. Note that we don't expect to be able to route 100\% of offered traffic in all failure scenarios without reconfiguring IP links; if we could, there would be little reason to go through the reconfiguration process at all. But, we already saw in Section \ref{subsec:CD ROADM benefits} that remapping the IP topology to the optical underlay adds significant value. \begin{figure*} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=.8\textwidth]{opt-transient450-labeled.png} \caption{Neighboring optical nodes 450 miles apart.} \label{subfig:opt-transient450} \end{subfigure}\begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.8\textwidth]{opt-transient600-labeled.png} \caption{Neighboring optical nodes 600 miles apart.} \label{subfig:opt-transient600} \end{subfigure} \caption{Percentage of failure scenarios for which rerouting over the existing IP links allows delivery of at least the indicated fraction of offered traffic.} \label{fig:transient} \end{figure*} \section{Related Work} \label{sec:related} Though there has been significant work on various aspects of IP/optical networks, no existing research addresses the joint optimization of IP and optical network design. At a high level, the Owan work by Jin et al. \cite{Jin2016} is similar to ours. Like our work, Owan is a centralized system that jointly optimizes the IP and optical topologies and configures network devices, including CD ROADMs, according to this global strategy. However, there are three key differences between Owan and our work. First, our objective differs from that of Jin et al. We aim to place tails and regens at minimum cost such that, under all failure scenarios, we can set up IP links to carry all necessary traffic. Jin et al.
aim to minimize the transfer completion time or maximize the number of transfers that meet their deadlines. Second, our work applies in a different setting. Owan is designed for bulk transfers and depends on the network operator being able to control sending rates, possibly delaying traffic for several hours. We target all ISP traffic; we can't rate-control any traffic, and we must route all demands, even in the case of failures, except during a brief transient period during IP link reconfiguration. Third, we make different assumptions about what parts of the infrastructure are given and fixed. Jin et al. take the locations of optical equipment as an input constraint, while we solve for the optimal places to put tails and regens. This distinction is crucial; Jin et al. don't need any notion of here-and-now decisions about where to place tails and regens separate from wait-and-see decisions about IP link configuration and routing. Other studies demonstrate that, to minimize delay, it is best to set up direct IP links between endpoints exchanging significant amounts of traffic, while relying on packet switching through multiple hops to handle lower demands \cite{Brzezinski2005}. Choudhury \cite{Choudhury2018} and Jin \cite{Jin2016} consider joint IP/optical optimization but use heuristic algorithms. \section{Conclusion} \label{sec:conclusion} Advances in optical technology and SDN have decoupled IP links from their underlying infrastructure (tails and regens). We have precisely stated and solved the new network design problem arising from these advances, and we have also presented a fast approximation algorithm that comes very close to the optimal solution. \section*{Acknowledgment} The authors would like to thank Mina Tahmasbi Arashloo for her discussions about the regen constraints and Manya Ghobadi, Xin Jin, and Sanjay Rao for their feedback on drafts. \bibliographystyle{IEEEtran}
{ "timestamp": "2019-04-16T02:11:45", "yymm": "1904", "arxiv_id": "1904.06574", "language": "en", "url": "https://arxiv.org/abs/1904.06574" }
\section{Equations} The compressible Euler system for barotropic flow in $\mathbb{R}^n_{\bf x}$ is given by \begin{align} \rho_t+\dv_{\bf x}(\rho {\bf u})&=0 \label{mass_m_d_isthrml_eul}\\ (\rho{\bf u})_t+\dv_{\bf x}[\rho {\bf u}\otimes{\bf u}]+\grad_{\bf x} p&=0,\label{mom_m_d_isthrml_eul} \end{align} where the independent variables are time $t$ and position ${\bf x}$, and the primary dependent variables are density $\rho$ and velocity ${\bf u}$, while pressure is a given function of density, $p=p(\rho)$. In {\em isothermal} flow of an ideal, polytropic gas the pressure is a linear function of density: \begin{equation}\label{pressure} p(\rho)=a^2\rho\qquad\qquad\text{($a>0$ constant)}. \end{equation} For {\em radial} ($\equiv$ spherically symmetric) solutions the dependent variables are functions of time $t$ and radial distance $r=|{\bf x}|$ to the origin, and the velocity field is purely radial: ${\bf u}=u\frac{\bf x}{r}$. In this case \eq{mass_m_d_isthrml_eul}-\eq{mom_m_d_isthrml_eul} reduces to the quasi-one-dimensional system \begin{align} \left(r^m\rho\right)_t+\left(r^m\rho u\right)_r &= 0\label{mass}\\ \left(r^m\rho u \right)_t+\left(r^m(\rho u^2+p)\right)_r &= mr^{m-1}p,\label{momentum} \end{align} where $m=n-1$. For smooth (Lipschitz) flows this reduces further to \begin{align} \rho_t+u\rho_r+\rho\left(u_r+\frac{mu}{r}\right) &= 0\label{m_eul}\\ u_t+ uu_r +\frac{p_r}{\rho}&= 0.\label{mom_eul} \end{align} In this work we shall be concerned exclusively with complete radial isothermal flows of {\em similarity type}. This means $\rho(t,r)$ and $u(t,r)$ are defined for all $t\in\mathbb{R}$, $r>0$, and are of the form \begin{equation}\label{fncl_relns} \rho(t,r)=\sgn(t)|t|^\beta\Omega(\xi)\,,\qquad u(t,r)=U(\xi), \end{equation} where the {\em similarity variable} $\xi$ is given by \[\xi=\frac{r}{t}.\] A discussion of our results and their relations to earlier works appears in Section \ref{conv_div_isothermal}. At this stage $\beta\in\mathbb{R}$ in \eq{fncl_relns} is a free parameter. Substitution of \eq{pressure} and \eq{fncl_relns} into \eq{m_eul}-\eq{mom_eul} yields the similarity ODEs (where ${}'\equiv \frac{d}{d\xi}$) \begin{align} (U-\xi)\frac{\Omega'}{\Omega}+U'+\Big(\beta+\frac{mU}{\xi}\Big)&=0 \label{isoth_omega_ode}\\ a^2\frac{\Omega'}{\Omega}+(U-\xi)U'&=0. \label{isoth_u_ode} \end{align} Solving for $\frac{\Omega'}{\Omega}$ in \eq{isoth_u_ode} and substituting into \eq{isoth_omega_ode} yields a single ODE for $U$: \begin{equation}\label{U_ode} U'=\frac{a^2}{(U-\xi)^2-a^2}\Big(\beta+\frac{mU}{\xi}\Big). \end{equation} Using this in \eq{isoth_u_ode} gives \begin{equation}\label{Omega_ode} \frac{\Omega'}{\Omega}=-\frac{U-\xi}{(U-\xi)^2-a^2}\Big(\beta+\frac{mU}{\xi}\Big). \end{equation} Before analyzing the similarity ODEs we consider the jump conditions in similarity variables. \section{Rankine-Hugoniot and Entropy conditions for similarity flows} Consider the radial barotropic Euler system \eq{mass}-\eq{momentum}, and assume that a discontinuity propagates along the path $r=\mathcal R(t)$. The Rankine-Hugoniot conditions are then \begin{align} \dot{\mathcal R}\big[\!\!\big[\rho\big]\!\!\big] &= \big[\!\!\big[ \rho u\big]\!\!\big] \label{rh_1}\\ \dot{\mathcal R}\big[\!\!\big[\rho u\big]\!\!\big] &= \big[\!\!\big[ \rho u^2+p\big]\!\!\big], \label{rh_2} \end{align} where $\dot{} \equiv \frac{d}{dt} $.
Here and below we use the convention that, for any quantity $q=q(t,r)$, $\big[\!\!\big[ q\big]\!\!\big]$ denotes the jump in $q$ across $r=\mathcal R(t)$, i.e., \[\big[\!\!\big[ q\big]\!\!\big]:=q_+-q_-\equiv q(t,\mathcal R(t)+)-q(t,\mathcal R(t)-).\] Next, denoting the local sound speed by \[c:=\sqrt{p'(\rho)},\] the entropy condition for a 1-shock requires that \begin{equation}\label{e1} u_--c_-> \dot{\mathcal R}> u_+-c_+, \end{equation} while the entropy condition for a 2-shock requires that \begin{equation}\label{e2} u_-+c_-> \dot{\mathcal R}> u_++c_+. \end{equation} \subsection{Radial isothermal similarity shocks} We next specialize to ``similarity shocks'' in radial isothermal flow: the pressure law is given by \eq{pressure} and the shock is assumed to propagate along a path of the form $\xi\equiv \bar \xi$, i.e., $\mathcal R(t)=\bar\xi t$. Furthermore, it is assumed that the density and velocity on either side of the shock are of the form \eq{fncl_relns}, with $\beta$ taking the same value on both sides. Let $(U_+,\Omega_+)$ and $(U_-,\Omega_-)$ denote the parts of the solution on the outside and inside of the shock, respectively. (``Outside'' and ``inside'' refer to further away from and closer to $r=0$, respectively.) The Rankine-Hugoniot conditions reduce to \begin{align*} \bar\xi\big[\!\!\big[\Omega\big]\!\!\big] &= \big[\!\!\big[ \Omega U\big]\!\!\big] \\ \bar\xi\big[\!\!\big[\Omega U\big]\!\!\big] &= \big[\!\!\big[ \Omega (U^2+a^2)\big]\!\!\big], \end{align*} where $\big[\!\!\big[\cdot\big]\!\!\big]$ now denotes the jump across $\xi=\bar\xi$. The entropy conditions \eq{e1}-\eq{e2} take the form \begin{align} &U_-(\bar\xi)> \bar\xi+a> U_+(\bar\xi)\qquad\text{for a 1-shock}\label{isoth_e_1}\\ &U_-(\bar\xi)> \bar\xi-a> U_+(\bar\xi)\qquad\text{for a 2-shock.}\label{isoth_e_2} \end{align} In particular, these relations show that for any shock in radial isothermal flow, the velocity necessarily decreases as we traverse the shock from the inside to the outside. Finally, setting $V_\pm:=U_\pm-\bar\xi$, where $U_\pm$ denotes $U_\pm(\bar\xi)$, the Rankine-Hugoniot conditions take the form $\big[\!\!\big[\Omega V\big]\!\!\big]=0$ and $\big[\!\!\big[\Omega VU+a^2\Omega\big]\!\!\big]=0$. It follows from these that $V_+V_-=a^2$, and that \begin{equation}\label{+ito-} U_+=\bar\xi+\frac{a^2}{U_--\bar\xi}\qquad \text{and}\qquad\Omega_+=\frac{(U_--\bar\xi)^2}{a^2}\Omega_-. \end{equation} Alternatively, solving for $V_-$ and $\Omega_-$, we have \begin{equation}\label{-ito+} U_-=\bar\xi+\frac{a^2}{U_+-\bar\xi}\qquad \text{and}\qquad\Omega_-=\frac{(U_+-\bar\xi)^2}{a^2}\Omega_+. \end{equation} \section{Converging-diverging isothermal flows}\label{conv_div_isothermal} By a ``converging-diverging solution'' we shall mean a radial similarity solution in which a wave approaches the origin, ``collapses'' there at some instant in time, resulting in a reflected wave moving away from the origin. Without loss of generality we set the time of collapse to be $t=0$. We shall search for this type of solution within the class of isothermal similarity solutions introduced above. To be of physical interest the solutions should satisfy, as a minimum, the following requirements: \begin{itemize} \item[(A)] the velocity vanishes along $\{r=0\}$: $u(t,0)\equiv 0$; \item[(B)] at any fixed location $r>0$, the limits \[\lim_{t\to 0}u(t,r)\qquad\text{and}\qquad \lim_{t\to 0}\rho(t,r)\] both exist as finite numbers. (Note that this requirement leaves open the possibility that $\rho(0,r)$ and/or $u(0,r)$ may blow up as $r\downarrow 0$.)
\end{itemize} In addition, we shall require that the density field is everywhere strictly positive: \begin{itemize} \item[(C)] the density never vanishes: $\rho(t,r)> 0$ for all $t\in\mathbb{R}$, $r\geq 0$. \end{itemize} Further constraints will be imposed later to guarantee that the solutions, as functions of $(t,{\bf x})\in\mathbb{R}\times \mathbb{R}^n$, provide genuine weak solutions of the original, multi-d isothermal system \eq{mass_m_d_isthrml_eul}-\eq{mom_m_d_isthrml_eul}. In particular, we shall require that the conserved quantities map time continuously into $L^1_{loc}(\mathbb{R}^n)$; see Section \ref{weak_solns} and also Section \ref{final_rmks}. For the {\em full} Euler system (including conservation of energy) the seminal work \cite{gud} by Guderley established the existence of converging-diverging similarity solutions in which a shock wave propagates into a quiescent state near the origin, focuses (collapses) at the origin, and reflects an expanding shock wave. Building on the detailed work of Lazarus \cite{laz} (which also treats the case of a collapsing vacuum), the present authors recently showed in \cite{jt1} that these ``Guderley solutions'' provide examples of genuine, entropy-admissible, weak solutions to the full, multi-d Euler system. A key feature of these converging-diverging shock solutions is that they provide concrete Euler flows suffering pointwise blowup of primary flow variables (as opposed to blowup of their gradients). Although the Guderley solutions establish the possibility of amplitude blowup in Euler flows for ideal gases, they are also at the borderline of the regime where one would expect the Euler system to be physically accurate. More precisely, in order to provide an {\em exact} weak solution, the sound speed in the quiescent state that the incoming shock moves into must vanish. For the ideal gas case under consideration, this means that the incoming shock does not experience any upstream counter-pressure. (The gas is at zero temperature there, and this is sometimes referred to as a ``cold gas assumption.'') It appears reasonable that this lack of counter-pressure facilitates unbounded growth of the shock speed, with concomitant increases in pressure and temperature. It is unclear at present whether this is the (or part of the) mechanism driving the blowup in Guderley solutions for the full Euler system. The alternative is that the blowup is a purely geometric effect driven by {\em wave focusing}, much like what occurs for radial solutions of the linear, multi-d wave equation. {\em The main goal of the present work is to show that amplitude blowup can occur in converging-diverging flows for the simplified isothermal Euler model, even in the presence of an everywhere strictly positive pressure field.} To the best of our knowledge, the solutions we generate are the first examples of unbounded barotropic flows that meet requirements (A)--(C) above. While these isothermal solutions are qualitatively different from the Guderley solutions for the full system described earlier (in particular, they are continuous up to collapse), they indicate that the real agent for blowup is the focusing of waves at the center of motion. On the other hand, it still remains an open problem to exhibit concrete flows for the full Euler system that exhibit blowup in the absence of zero-pressure regions. For completeness we include some remarks on what is known about radial Euler flows with ``general'' initial data.
First, there is currently no result for the full, multi-d Euler system, radial or not, that guarantees global-in-time existence. For radial isentropic flows, i.e., solutions to \eq{mass}-\eq{momentum} with $p(\rho)=a^2\rho^\gamma$ and $\gamma>1$, results by Chen-Perepelitsa \cite{cp} and Chen-Schrecker \cite{cs} provide existence of weak, finite energy solutions via the method of compensated compactness. In fact, the recent work \cite{sch} is the first to show that the solutions one obtains in this manner provide genuine, weak solutions to the original, multi-d isentropic Euler system \eq{mass_m_d_isthrml_eul}-\eq{mom_m_d_isthrml_eul} on {\em all} of space. On the other hand, there appears to be little hope of extending this approach (i.e., compensated compactness) to the radial full system, or even (for technical reasons \cite{sch1}) to the radial, isothermal ($\gamma=1$) system. As far as we know, the currently strongest global existence result for the radial isothermal system applies to the case of {\em external} flows, i.e., for flows outside of a fixed ball. This problem was analyzed in \cite{mmu1} by exploiting the Glimm scheme, providing existence for a certain class of initial data of bounded variation; for an extension, see \cite{mmu2}. The results of the present paper show that, in order to extend these results to solutions defined on {\em all} of space (i.e., including the origin), one must necessarily contend with unbounded solutions. For results closer to the present work, which concerns concrete Euler flows in several space dimensions, see Chapter 7 of Zheng's monograph \cite{zheng} on multi-d Riemann problems, some of which generate purely radial flows. However, we stress that the radial flows we construct below are not solutions to Riemann problems. Specifically, the solutions we display are necessarily non-constant in the radial direction at all times. The rest of the present paper is organized as follows. Section \ref{constr_conv_div_solns} provides a detailed construction of the radial speed $u(t,r)$ and the corresponding density $\rho(t,r)$ for converging-diverging similarity flows for the isothermal Euler system. In Section \ref{weak_solns} we briefly recall the definition of weak solutions to the barotropic Euler system, including its formulation for the special case of radial solutions. In Section \ref{sim_weak_solns} we verify that the radial similarity flows we construct provide genuine weak solutions to the original, multi-d isothermal Euler system. The main result is summarized in Theorem \ref{main_result}. Finally, Section \ref{final_rmks} collects some additional observations about the flows constructed in this paper. \section{Construction of converging-diverging isothermal flows}\label{constr_conv_div_solns} To construct concrete examples of converging-diverging isothermal similarity flows, we start with the ODE \eq{U_ode} for the velocity $U(\xi)$. This ODE has three critical points: the origin $(0,0)$ and the points $\pm P_{\ensuremath{\mathrm{w}}}:=(\pm\xi_{\ensuremath{\mathrm{w}}},\pm U_{\ensuremath{\mathrm{w}}})$, where \[\xi_{\ensuremath{\mathrm{w}}}:=-\frac{am}{m+\beta},\qquad U_{\ensuremath{\mathrm{w}}}:=\frac{a\beta}{m+\beta}.\] (The subscript ``w'' stands for ``weak,'' for reasons that will become clear later.) We also observe that the solutions of \eq{U_ode} are symmetric about the origin: if $\xi\mapsto U(\xi)$ is a solution of \eq{U_ode}, so is $\xi\mapsto -U(-\xi)$.
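Indeed, if $U$ solves \eq{U_ode} and we set $\tilde U(\xi):=-U(-\xi)$, then, writing $\eta:=-\xi$ and noting that $\tilde U(\xi)-\xi=-(U(\eta)-\eta)$ while $\frac{\tilde U(\xi)}{\xi}=\frac{U(\eta)}{\eta}$, we obtain \[\tilde U'(\xi)=U'(\eta) =\frac{a^2}{(U(\eta)-\eta)^2-a^2}\Big(\beta+\frac{mU(\eta)}{\eta}\Big) =\frac{a^2}{(\tilde U(\xi)-\xi)^2-a^2}\Big(\beta+\frac{m\tilde U(\xi)}{\xi}\Big),\] so that $\tilde U$ solves \eq{U_ode} as well.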
Instead of performing a lengthy analysis of all possible cases, from now on we focus on the cases where \begin{equation}\label{case3} -m<\beta<0 \qquad\text{and}\qquad m=1\quad\text{or}\quad m=2. \end{equation} In particular, $\xi_{\ensuremath{\mathrm{w}}}<0$ and $U_{\ensuremath{\mathrm{w}}}<0$ for all cases under consideration. Introducing the straight lines \[l_\pm:=\{U=\xi\pm a\}\qquad\text{and}\qquad \omega:=\{\beta+\textstyle\frac{mU}{\xi}=0\},\] we have that $\pm P_{\ensuremath{\mathrm{w}}}=l_\pm\cap\omega$. Linearizing \eq{U_ode} about the critical points $\pm P_\ensuremath{\mathrm{w}}$, we set \begin{equation}\label{lambdas} \lambda_\pm=\textstyle\frac{1}{2}\Big[\big(1+\frac{m}{2}(1+\mu)\big) \pm\sqrt{\big(1+\frac{m}{2}(1+\mu)\big)^2-2m(1+\mu)^2}\Big], \end{equation} where \[\mu:=\textstyle\frac{\beta}{m}\in(-1,0).\] It is immediate to verify that the radicand in \eq{lambdas} is strictly positive whenever \eq{case3} holds. (E.g., for $m=2$ and $\beta=-1$, i.e., $\mu=-\frac{1}{2}$, one finds $\lambda_\pm=\frac{3\pm\sqrt{5}}{4}$, so that $0<\lambda_-<\lambda_+$, $1-\lambda_+<0$, and $-\mu<1-\lambda_-<1$, in accordance with items (a) and (d) below.) An analysis of the critical points shows that: \begin{enumerate} \item[(a)] The point $P_{\ensuremath{\mathrm{w}}}$ is an unstable node for \eq{U_ode} whenever \eq{case3} holds, i.e., we have $0<\lambda_-<\lambda_+$. \item[(b)] There are two solutions leaving $P_{\ensuremath{\mathrm{w}}}$ along the directions $\pm(1,1-\lambda_+)$. \item[(c)] All other solutions leaving $P_{\ensuremath{\mathrm{w}}}$ do so along the directions $\pm(1,1-\lambda_-)$. \item[(d)] Whenever \eq{case3} holds we have $1-\lambda_+<0$ and $-\mu<1-\lambda_-<1$; thus all but the two solutions described in (b) enter the region between the straight lines $\omega$ and $l_+$. \item[(e)] There is a unique solution passing through $(0,0)$; it does so with slope $-\frac{\beta}{n}$, and this solution is located below $l_+$ and above $\omega$; it extends back (i.e., as $\xi$ decreases) to $P_{\ensuremath{\mathrm{w}}}$, approaching $P_{\ensuremath{\mathrm{w}}}$ along the direction $-(1,1-\lambda_-)$. \end{enumerate} We denote the unique solution described in (e) by $\hat U(\xi)$. It passes through the origin and, by symmetry about the origin, is defined for all $\xi\in[\xi_{\ensuremath{\mathrm{w}}},-\xi_{\ensuremath{\mathrm{w}}}]$, and connects to the third critical point $-P_\ensuremath{\mathrm{w}}$. See Figure 1. \begin{figure} \centering \includegraphics[width=16cm,height=8cm]{U_complete} \caption{Complete $U(\xi)$-profile (schematic).}\label{Figure_1} \end{figure} \subsection{The radial speed $u(t,r)$ for $t\leq 0$} The part of $\hat U(\xi)$ corresponding to $\xi\in[\xi_{\ensuremath{\mathrm{w}}},0]$ yields, via \eq{fncl_relns}${}_2$, the radial speed $u(t,r)$ within the sector \[S_-:=\{(r,t)\,:\,\xi_{\ensuremath{\mathrm{w}}}\leq\textstyle \frac{r}{t}\leq 0\}\] in the $(r,t)$-plane. Note that the choice of the solution $\hat U(\xi)$ in this region is dictated by requirement (A) above. Similarly, we shall use a certain portion of $\hat U(\xi)$ for $\xi>0$ to obtain the radial speed $u(t,r)$ within a sector \[S_+:=\{(r,t)\,:\,0\leq \textstyle \frac{r}{t}\leq \xi_\ensuremath{\mathrm{s}}\}.\] Here the value of $\xi_\ensuremath{\mathrm{s}}\in(0,-\xi_\ensuremath{\mathrm{w}})$, yet to be determined, corresponds to the path $t\mapsto \xi_\ensuremath{\mathrm{s}} t$ of an expanding shock wave for $t>0$. However, we first need to continue the relevant $U$-solution beyond $\xi_{\ensuremath{\mathrm{w}}}$, all the way down to $\xi=-\infty$.
Now, there are infinitely many solutions of \eq{U_ode} defined for all $\xi<\xi_{\ensuremath{\mathrm{w}}}$, passing through $P_{\ensuremath{\mathrm{w}}}$, and with the property that they enter (as $\xi$ decreases) the region $\mathcal U$ to the left of $P_\ensuremath{\mathrm{w}}$ and above $\omega$, i.e., \[\mathcal U:=\{\,(\xi,U)\,:\, \xi< \xi_\ensuremath{\mathrm{w}}\quad \text{and}\quad U>-\mu\xi\,\}.\] Let $\check U(\xi)$ denote any such solution. We therefore have an infinity of choices for $\check U(\xi)$. As we shall see below, all of these solutions (that enter $\mathcal U$ at points along $\omega$) tend to finite limits at $\xi=-\infty$, as dictated by the first part of requirement (B) above. However, it will be convenient for the subsequent analysis to also have $\check U(-\infty)<0$. We proceed to show that there are solutions satisfying this constraint, as well as the constraints in \eq{case3}. \subsubsection{Asymptotics for large, negative $\xi$-values.} As is clear from the linearization of \eq{U_ode} at $P_\ensuremath{\mathrm{w}}$, all but one of the solutions $\check U(\xi)$ defined on $(-\infty,\xi_\ensuremath{\mathrm{w}})$ approach $P_\ensuremath{\mathrm{w}}$ along $(1,1-\lambda_-)$; all of these connect smoothly at $\xi=\xi_\ensuremath{\mathrm{w}}$ with the solution $\hat U(\xi)$ on $[\xi_\ensuremath{\mathrm{w}},0]$ considered above. The exception is the ``kink-solution'' $U_\ensuremath{\mathrm{k}}(\xi)$ which approaches $P_\ensuremath{\mathrm{w}}$ along $(1,1-\lambda_+)$. It is clear that $U_\ensuremath{\mathrm{k}}(\xi)$ lies above any solution $\check U(\xi)$ of \eq{U_ode} which is located in $\mathcal U$ and which exits $\mathcal U$ (as $\xi$ increases) at a point on $\omega$. For our purpose of having $\check U(-\infty)<0$, it therefore suffices to identify cases for which $U_\ensuremath{\mathrm{k}}(\xi)$ tends to a strictly negative limit at $\xi=-\infty$, and then employ $U_\ensuremath{\mathrm{k}}$ in our construction of $u(t,r)$ within the sector \[S'_-:=\{(r,t)\,:\,-\infty<\textstyle \frac{r}{t}\leq \xi_{\ensuremath{\mathrm{w}}}\}.\] We start by observing that for $(\xi,U)\in\mathcal U$ we have $U-\xi-a\geq U+\mu \xi\geq 0$ so that \[\frac{U+\mu \xi}{U-\xi-a}\leq 1\qquad\text{within $\mathcal U$}.\] Therefore, any solution $\check U(\xi)$ of \eq{U_ode} in $\mathcal U$ satisfies \[\check U'(\xi)=\frac{a^2m(\check U(\xi)+\mu \xi)}{\xi(\check U(\xi)-\xi-a)(\check U(\xi)-\xi+a)} \geq \frac{a^2m}{\xi(\check U(\xi)-\xi+a)}.\] Specializing to the kink-solution $U_\ensuremath{\mathrm{k}}(\xi)$, which satisfies $U_\ensuremath{\mathrm{k}}(\xi)>U_\ensuremath{\mathrm{w}}$ for $\xi<\xi_\ensuremath{\mathrm{w}}$, we obtain \[U_\ensuremath{\mathrm{k}}'(\xi) \geq \frac{a^2m}{\xi(U_\ensuremath{\mathrm{k}}(\xi)-\xi+a)} >\frac{ma^2}{\xi(U_\ensuremath{\mathrm{w}}+a-\xi)}\qquad\text{for $\xi<\xi_\ensuremath{\mathrm{w}}$.}\] Integrating from $\xi=-\infty$ to $\xi=\xi_\ensuremath{\mathrm{w}}$, and using that $U_\ensuremath{\mathrm{k}}(\xi_\ensuremath{\mathrm{w}})=U_\ensuremath{\mathrm{w}}$, yields \begin{equation}\label{limit} U_\ensuremath{\mathrm{k}}(-\infty)< U_\ensuremath{\mathrm{w}}+a^2m\int_{-\infty}^{\xi_\ensuremath{\mathrm{w}}}\frac{d\xi}{\xi(\xi-(a+U_\ensuremath{\mathrm{w}}))}. \end{equation} Therefore, whenever $m$ and $\beta$ satisfy $-m<\beta<0$, and are such that the right-hand side of \eq{limit} is non-positive, the kink-solution $U_\ensuremath{\mathrm{k}}(\xi)$ tends to a strictly negative limit as $\xi\to -\infty$.
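We note that the right-hand side of \eq{limit} can be evaluated in closed form in concrete cases. For instance, along the one-parameter family $\beta=-\frac{m}{2}$ one has $U_\ensuremath{\mathrm{w}}=-a$ and $\xi_\ensuremath{\mathrm{w}}=-2a$, so that $a+U_\ensuremath{\mathrm{w}}=0$ and the right-hand side of \eq{limit} reduces to \[U_\ensuremath{\mathrm{w}}+a^2m\int_{-\infty}^{-2a}\frac{d\xi}{\xi^2} =-a+\frac{am}{2}=\frac{a(m-2)}{2}.\]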
E.g., with $m=2$ and $\beta=-1$, the right-hand side of \eq{limit} takes the value zero, while for $m=1$ and $\beta=-\frac{1}{2}$ it takes a strictly negative value. \begin{assumption}\label{assmpn} From now on it is assumed that $m$ and $\beta$ are such that $m=1$ or $m=2$, $-m<\beta<0$, and at the same time \[U^*:=U_\ensuremath{\mathrm{k}}(-\infty)<0;\] the argument above demonstrates that such values of $m$ and $\beta$ exist. \end{assumption} \noindent As indicated above, we use the kink-solution $U_\ensuremath{\mathrm{k}}(\xi)$ to specify the radial speed $u(t,r)$, via \eq{fncl_relns}${}_2$, within the sector $S'_-:=\{(r,t)\,:\,-\infty<\textstyle \frac{r}{t}\leq \xi_{\ensuremath{\mathrm{w}}}\}.$ \subsection{The radial speed $u(t,r)$ for $t\geq 0$; the reflected shock.}\label{u_for_pos_t} Next, we need to specify the radial speed $u(t,r)$ within the sector \[S'_+:=\{(r,t)\,:\,\xi_{\ensuremath{\mathrm{s}}}<\textstyle \frac{r}{t}<\infty\},\] where $\xi_{\ensuremath{\mathrm{s}}}>0$ is yet to be determined. The relevant solution $\tilde U(\xi)$ of \eq{U_ode} (i.e., which is defined for $\xi\in (\xi_{\ensuremath{\mathrm{s}}},\infty)$) should give a radial speed $u(t,r)$ which is continuous across $\{t=0\}$. It follows that $\tilde U(\xi)$ must be the solution to \eq{U_ode} which approaches the value $U^*=U_\ensuremath{\mathrm{k}}(-\infty)$ as $\xi\uparrow\infty$. Now, as we integrate along decreasing $\xi$-values, inward from $\xi=\infty$, the solution $\tilde U(\xi)$ remains below the solution $-U_\ensuremath{\mathrm{k}}(-\xi)$. This follows since the latter function is a solution of \eq{U_ode} (recall that solutions of \eq{U_ode} lie symmetrically about the origin), and since it starts out from $\xi=\infty$ with the value $-U^*>0>U^*=\tilde U(\infty)$. As a consequence we have that the solution $\tilde U(\xi)$ intersects the straight line $l_-$ at some $\xi$-value $\xi^*$ with $0<\xi^*<-\xi_\ensuremath{\mathrm{w}}$. Finally, to determine the shock location $\xi_\ensuremath{\mathrm{s}}$ we argue as follows. Returning to the solution $\hat U(\xi)$ introduced earlier, but now considered for $\xi\in(0,-\xi_\ensuremath{\mathrm{w}}]$, we let $\hat{\mathcal H}$ denote its associated ``Hugoniot locus.'' That is, $\hat{\mathcal H}$ is the set (curve) of points $(\xi,\hat H(\xi))$ that connect to a point on the solution curve $(\xi,\hat U(\xi))$ through a jump discontinuity with $U_-=\hat U(\xi)$ and $U_+=\hat H(\xi)$. According to \eq{+ito-}${}_1$, $\hat{\mathcal H}$ is the graph of the function \[\hat H(\xi):=\xi+\frac{a^2}{\hat U(\xi)-\xi}\qquad\text{for $0<\xi<-\xi_\ensuremath{\mathrm{w}}$.}\] The following claim follows directly from the properties of the solution $\hat U(\xi)$. \begin{claim}\label{claim} The function $\hat H(\xi)$ has the following properties: \begin{itemize} \item [(i)] $\hat H(\xi)<\xi-a$ for $0<\xi<-\xi_\ensuremath{\mathrm{w}}$, \item [(ii)] $\lim_{\xi\downarrow 0}\hat H(\xi)=-\infty$, and \item [(iii)] $\hat H(-\xi_\ensuremath{\mathrm{w}})=-U_\ensuremath{\mathrm{w}}$. \end{itemize} \end{claim} \noindent In particular, it follows from these properties that the graphs of $\hat H(\xi)$ and $\tilde U(\xi)$ intersect for some $\xi=\xi_\ensuremath{\mathrm{s}}\in (0,-\xi_\ensuremath{\mathrm{w}})$. (Numerical plots indicate that $\hat H(\xi)$ is strictly increasing on $(0,-\xi_\ensuremath{\mathrm{w}})$; if so, $\xi_\ensuremath{\mathrm{s}}$ is uniquely determined. However, we have not been able to provide an analytic proof for this.)
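While nothing in the construction depends on it, the numerical evidence just mentioned is easy to reproduce. The following is a minimal sketch (it assumes NumPy and SciPy; the values $a=1$, $m=2$, $\beta=-1$, the starting offset, and the tolerances are illustrative choices, not part of the construction): it integrates \eq{U_ode} for $\hat U$ starting just to the right of the origin, where $\hat U$ has slope $-\frac{\beta}{n}$, and then tests the monotonicity of $\hat H(\xi)=\xi+a^2/(\hat U(\xi)-\xi)$ on $(0,-\xi_\ensuremath{\mathrm{w}})$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

a, m, beta = 1.0, 2.0, -1.0        # illustrative case; mu = beta/m = -1/2
n = m + 1
xi_w = -a*m/(m + beta)             # = -2a for these values

# similarity ODE (U_ode): U' = a^2 (m U + beta xi) / (xi ((U-xi)^2 - a^2))
def rhs(xi, U):
    return [a**2*(m*U[0] + beta*xi) / (xi*((U[0] - xi)**2 - a**2))]

# hat U passes through the origin with slope -beta/n; start just beyond it,
# and stop just short of -xi_w where the ODE has a 0/0 critical point
xi0 = 1e-6
sol = solve_ivp(rhs, [xi0, -xi_w*(1 - 1e-6)], [-beta/n*xi0],
                dense_output=True, rtol=1e-10, atol=1e-12)

xi = np.linspace(1e-3, -xi_w*(1 - 1e-3), 2000)
U_hat = sol.sol(xi)[0]
H_hat = xi + a**2/(U_hat - xi)     # Hugoniot locus hat H(xi)
print("hat H strictly increasing:", bool(np.all(np.diff(H_hat) > 0)))
\end{verbatim}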
It follows from part (i) of Claim \ref{claim} that the point of intersection lies below $l_-$. Since the graph of $\hat U(\xi)$ lies between $l_-$ and $l_+$ for $\xi\in (0,-\xi_\ensuremath{\mathrm{w}})$, we conclude from \eq{isoth_e_1}-\eq{isoth_e_2} that the jump discontinuity with $U_-=\hat U(\xi_\ensuremath{\mathrm{s}})$ and $U_+=\hat H(\xi_\ensuremath{\mathrm{s}})=\tilde U(\xi_\ensuremath{\mathrm{s}})$ satisfies the entropy condition for a 2-shock. See Figure 1. \bigskip \noindent {\bf Summing up:} The radial speed $u(t,r)$ is defined in terms of the solutions $\hat U$, $U_\ensuremath{\mathrm{k}}$, and $\tilde U$ of the similarity ODE \eq{U_ode}, as follows: \begin{equation}\label{u_final} u(t,r)=U(\textstyle\frac{r}{t}) :=\left\{\begin{array}{ll} \hat U(\frac{r}{t}) & \xi_\ensuremath{\mathrm{w}}\leq \frac{r}{t}< \xi_\ensuremath{\mathrm{s}}\\\\ U_\ensuremath{\mathrm{k}}(\frac{r}{t}) & -\infty<\frac{r}{t}\leq \xi_\ensuremath{\mathrm{w}} \\\\ \tilde U(\frac{r}{t}) & \xi_\ensuremath{\mathrm{s}}<\frac{r}{t}< \infty. \end{array}\right. \end{equation} We note that requirement (A) above is met (since $\hat U(0)=0$). Furthermore, this solution contains a converging weak discontinuity (``kink'') propagating with constant speed along $\{\frac{r}{t}=\xi_\ensuremath{\mathrm{w}}\}$ for $t<0$ (i.e., $u$ is continuous while its first derivatives jump there), and an expanding, entropy admissible 2-shock discontinuity propagating with constant speed along $\{\frac{r}{t}=\xi_\ensuremath{\mathrm{s}}\}$ for $t>0$. For later use, we record that the radial speed at time of collapse $t=0$ takes the constant value \begin{equation}\label{speed_at_collapse} u(0,r)\equiv U^*=U_\ensuremath{\mathrm{k}}(-\infty)\qquad\text{for $r>0$.} \end{equation} \begin{remark} The function $\tilde U(\xi)$ is strictly decreasing on $(\xi_\ensuremath{\mathrm{s}},\infty)$ and tends to $U^*<0$ as $\xi\to\infty$. Numerical calculations show that there are cases for which $\tilde U(\xi_\ensuremath{\mathrm{s}})>0$ (e.g., this is the case when $m=2$, $\beta=-1$), showing that stagnation (vanishing flow velocity) may occur upstream of the expanding shock. \end{remark} \begin{remark} In the construction above of $U(\xi)$ on $(-\infty,\xi_\ensuremath{\mathrm{w}})$ we made use of the particular ``kink'' solution $U_\ensuremath{\mathrm{k}}(\xi)$. We note that, having established that $U_\ensuremath{\mathrm{k}}(-\infty)<0$, we could just as well have used any other solution $\check U(\xi)$ of \eq{U_ode} that is located within the region $\mathcal U$ and which exits $\mathcal U$ at a point along the line $\omega$. As noted above, any such solution $\check U(\xi)$ connects smoothly at $\xi=\xi_\ensuremath{\mathrm{w}}$ to the solution $\hat U(\xi)$ on $[\xi_\ensuremath{\mathrm{w}},0]$, and will therefore give converging flows without any weak discontinuities. As $U^*=U_\ensuremath{\mathrm{k}}(-\infty)<0$, it follows that any such solution $\check U(\xi)$ tends to a finite value, $U^{**}$ say, as $\xi\to-\infty$, where $U^{**}<U^*<0$. Then, starting from $U^{**}$ at $\xi=+\infty$ and integrating toward the origin, we would generate a solution $U^\circ(\xi)$ (instead of $\tilde U(\xi)$ as above), which again could be connected via a jump discontinuity to the solution $\hat U(\xi)$ on $[0,-\xi_\ensuremath{\mathrm{w}}]$. In particular, we may arrange that $U^{**}$ is sufficiently negative that $U^\circ(\xi)$ intersects the Hugoniot curve $\hat H(\xi)$ below the $\xi$-axis; if so, no stagnation occurs in the corresponding flow.
\end{remark} \subsection{The radial density field $\rho(t,r)$} With the radial speed defined for all $r\geq 0$ and $t\in\mathbb{R}$, we turn to the density, which is given via \eq{fncl_relns}${}_1$, \begin{equation}\label{rho} \rho(t,r)=\sgn(t)|t|^\beta\Omega(\xi)\qquad\qquad \xi=\frac{r}{t}, \end{equation} where $\Omega$ solves the ODE \eq{isoth_omega_ode}: \begin{equation}\label{Omega_ode_u} \frac{\Omega'(\xi)}{\Omega(\xi)}=-\frac{1}{a^2}(U(\xi)-\xi)U'(\xi), \end{equation} and $U(\xi)$ is given by \eq{u_final}. We need to argue that this ODE, together with the jump relations at $\xi_\ensuremath{\mathrm{s}}$, yields a physically acceptable density field $\rho(t,r)$ satisfying the requirements (B) and (C) in Section \ref{conv_div_isothermal}. As $\beta<0$, it is clear from the second part of requirement (B) that a necessary condition on $\Omega$ is that $\Omega(\pm\infty)=0$. However, this is not sufficient to guarantee that (B) holds, and we therefore cannot use this as an initial condition for the $\Omega$-solution. Instead, as we shall see, we can freely assign $\Omega(0-)$ to be any negative constant $\Omega_0<0$. Having fixed $\Omega_0<0$ we then want to solve the ODE \eq{Omega_ode_u}, where $U(\xi)$ is given by \eq{u_final}. Before considering the details we outline the order of the various steps for constructing $\Omega(\xi)$. In what follows, $U(\xi)$ is always given by \eq{u_final}. We first solve \eq{Omega_ode_u} for $\xi\in [\xi_\ensuremath{\mathrm{w}},0]$, obtaining the solution $\hat \Omega(\xi)$ with the initial condition $\Omega(0-)=\Omega_0<0$. We then solve \eq{Omega_ode_u} for $\xi\in (-\infty,\xi_\ensuremath{\mathrm{w}}]$ with $\Omega(\xi_\ensuremath{\mathrm{w}})$ as initial data at $\xi=\xi_\ensuremath{\mathrm{w}}$, obtaining the solution $\Omega_\ensuremath{\mathrm{k}}(\xi)$. As for the velocity $U(\xi)$, the resulting function $\Omega(\xi)$ for $\xi\in(-\infty,0)$ suffers a weak discontinuity across $\xi=\xi_\ensuremath{\mathrm{w}}$. Below we shall show that $\Omega_\ensuremath{\mathrm{k}}(\xi)$ tends to zero as $\xi\to-\infty$, and furthermore that it does so in such a manner that \begin{equation}\label{rho_asymp} \lim_{t\uparrow0}\rho(t,r)=-C_-r^\beta, \end{equation} where $C_-<0$ is a constant; see \eq{lim_from_below}. This will ensure that the constraint (B) is satisfied for times approaching zero from below. Since $\beta<0$, it also demonstrates that the density field we construct suffers blowup at the origin. We next need to solve for the density field $\rho(t,r)$ for $\xi_\ensuremath{\mathrm{s}}<\xi<\infty$, and for this it is convenient to switch to the independent variable \[x:=\frac{1}{\xi}=\frac{t}{r},\] and set \[D(x):=\Omega(\xi).\] To select the relevant $D$-solution we linearize the ODE for $D(x)$ about the origin in the $(x,D)$-plane and observe that this is a node. The leading order behavior of the solutions near the origin is of the form \[D(x)\sim C |x|^{|\beta|},\qquad \text{$C$ constant.}\] In terms of $\rho(t,r)$ this implies that \[\lim_{t\downarrow0}\rho(t,r)=C_+r^\beta,\] for a constant $C_+$. Continuity of the density field $\rho(t,r)$ across $\{t=0\}$ requires that we choose $C_+=-C_-$, where $C_-$ is as in \eq{rho_asymp}. This choice fixes a unique $D$-solution $\tilde D(x)$ for $x\gtrsim 0$, which is then unproblematic to extend to all of $[0,x_\ensuremath{\mathrm{s}})$, where $x_\ensuremath{\mathrm{s}}=\frac{1}{\xi_\ensuremath{\mathrm{s}}}$.
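(To indicate where the exponent $|\beta|$ comes from: as $x\downarrow 0$ we have $\xi=\frac{1}{x}\to\infty$ and $U(\xi)\to U^*$, while \eq{U_ode} gives $U'(\xi)\approx\frac{a^2\beta}{\xi^2}$ to leading order. Hence, by the chain rule, \[\frac{D'(x)}{D(x)}=\frac{1}{a^2x^2}\Big[U\big(\tfrac{1}{x}\big)-\tfrac{1}{x}\Big]U'\big(\tfrac{1}{x}\big) \approx\frac{1}{a^2x^2}\cdot\Big(-\frac{1}{x}\Big)\cdot a^2\beta x^2=-\frac{\beta}{x},\] so that $D(x)\approx Cx^{-\beta}=C|x|^{|\beta|}$.)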
Switching back to $\xi$ as independent variable, we set \[\tilde \Omega(\xi):=\tilde D(\textstyle\frac{1}{\xi})\qquad\text{for $\xi_\ensuremath{\mathrm{s}}<\xi<\infty$.}\] In particular, this provides us with the value $\tilde\Omega(\xi_\ensuremath{\mathrm{s}}+)$ at the immediate outside of the expanding shock-wave propagating along $\xi=\xi_\ensuremath{\mathrm{s}}$. Applying the Rankine-Hugoniot condition \eq{-ito+}${}_2$ with $\bar\xi=\xi_\ensuremath{\mathrm{s}}$ and $\Omega_\pm=\Omega(\xi_\ensuremath{\mathrm{s}}\pm)$, we thus determine $\Omega(\xi_\ensuremath{\mathrm{s}}-)$. This, finally, provides the initial data at $\xi=\xi_\ensuremath{\mathrm{s}}-$ for the relevant solution $\hat\Omega(\xi)$ of \eq{Omega_ode_u} for $\xi\in (0,\xi_\ensuremath{\mathrm{s}})$. This last step of solving \eq{Omega_ode_u} on $(0,\xi_\ensuremath{\mathrm{s}})$ is unproblematic and yields a final limiting value \[\Omega_0'=\lim_{\xi\downarrow 0}\hat\Omega(\xi).\] We note that, unlike the velocity $\hat U(\xi)$, which takes the value zero at $\xi=0$, the function $\hat\Omega(\xi)$ will suffer a jump discontinuity there. Finally, it is easily verified that the resulting density field satisfies $\rho(t,r)>0$ for all $t\in \mathbb{R}$, $r>0$. See Figure 2. We proceed with the details. \begin{figure} \centering \includegraphics[width=16cm,height=8cm]{Omega_complete} \caption{Complete $\Omega(\xi)$-profile (schematic).}\label{Figure_2} \end{figure} \subsection{Asymptotics of the density $\rho(t,r)$ for $t\leq 0$} The first step is to solve \begin{equation}\label{hat_Omega_ode_1} \frac{\hat\Omega'(\xi)}{\hat\Omega(\xi)}=-\frac{1}{a^2}(\hat U(\xi)-\xi)\hat U'(\xi)=:\hat F(\xi) \qquad\text{for $\xi\in[\xi_\ensuremath{\mathrm{w}},0]$,} \end{equation} where $\hat U(\xi)$ was determined above. As initial data we fix any constant $\Omega_0<0$ and set \[\hat \Omega(0-):=\Omega_0.\] It follows from the properties of $\hat U(\xi)$ that $\hat F(\xi)$ is a bounded, smooth function on $[\xi_\ensuremath{\mathrm{w}},0]$, so that solving \eq{hat_Omega_ode_1} is unproblematic. We note that \begin{equation}\label{Omega_sign1} \hat \Omega(\xi)< 0\qquad\text{and}\qquad \hat \Omega'(\xi)\geq 0\qquad\text{for $\xi\in[\xi_\ensuremath{\mathrm{w}},0]$.} \end{equation} Next we want to solve \begin{equation}\label{Omega_k_ode} \frac{\Omega_\ensuremath{\mathrm{k}}'(\xi)}{\Omega_\ensuremath{\mathrm{k}}(\xi)}=-\frac{1}{a^2}(U_\ensuremath{\mathrm{k}}(\xi)-\xi) U'_\ensuremath{\mathrm{k}}(\xi)=: F_\ensuremath{\mathrm{k}}(\xi) \qquad\text{for $\xi\in(-\infty,\xi_\ensuremath{\mathrm{w}}]$,} \end{equation} where $U_\ensuremath{\mathrm{k}}(\xi)$ was determined above. To establish \eq{rho_asymp} we first show that \begin{equation}\label{est_0} \int_{-\infty}^{\xi_\ensuremath{\mathrm{w}}} |F_\ensuremath{\mathrm{k}}(\eta)-\textstyle\frac{\beta}{\eta}|\,d\eta<\infty. \end{equation} Indeed, by using that \[U'_\ensuremath{\mathrm{k}}=\frac{a^2}{(U_\ensuremath{\mathrm{k}}-\xi)^2-a^2}\Big(\beta+\frac{mU_\ensuremath{\mathrm{k}}}{\xi}\Big),\] together with the fact that $U_\ensuremath{\mathrm{k}}(\xi)\to U^*=U_\ensuremath{\mathrm{k}}(-\infty)<0$, it is straightforward to verify that \begin{equation}\label{est_1} |F_\ensuremath{\mathrm{k}}(\xi)-\textstyle\frac{\beta}{\xi}|\leq\frac{C}{\xi^2}\qquad\text{for $\xi\in(-\infty,\xi_\ensuremath{\mathrm{w}}]$,} \end{equation} for a suitable constant $C$, and \eq{est_0} follows.
Integrating \eq{Omega_k_ode}, we obtain \[\Omega_\ensuremath{\mathrm{k}}(\xi)=\Omega_\ensuremath{\mathrm{w}}\frac{|\xi|^\beta}{|\xi_\ensuremath{\mathrm{w}}|^\beta} \cdot\exp\Big(\int_{\xi}^{\xi_\ensuremath{\mathrm{w}}} \big(\textstyle\frac{\beta}{\eta}-F_\ensuremath{\mathrm{k}}(\eta)\big)\,d\eta\Big),\] where $\Omega_\ensuremath{\mathrm{w}}:=\Omega_\ensuremath{\mathrm{k}}(\xi_\ensuremath{\mathrm{w}})<0$. Applying \eq{est_1} yields \begin{equation}\label{Omega_asymp} \Omega_\ensuremath{\mathrm{k}}(\xi) \sim C_-|\xi|^\beta\qquad\text{as $\xi\to-\infty$,} \end{equation} where \[C_-=\frac{\Omega_\ensuremath{\mathrm{w}}}{|\xi_\ensuremath{\mathrm{w}}|^\beta} \cdot\exp\Big(\int_{-\infty}^{\xi_\ensuremath{\mathrm{w}}} \big(\textstyle\frac{\beta}{\eta}-F_\ensuremath{\mathrm{k}}(\eta)\big)\,d\eta\Big)<0.\] Applying this in \eq{rho} we obtain \begin{equation}\label{lim_from_below} \lim_{t\uparrow0}\rho(t,r)=\lim_{t\uparrow0}\,\,\sgn(t)|t|^\beta\Omega_\ensuremath{\mathrm{k}}(\textstyle\frac{r}{t}) =-C_-r^\beta\qquad \text{at any fixed location $r>0$,} \end{equation} verifying \eq{rho_asymp}. We also note that \eq{Omega_k_ode}, together with the properties of $U_\ensuremath{\mathrm{k}}(\xi)$, implies that \begin{equation}\label{Omega_sign2} \Omega_\ensuremath{\mathrm{k}}(\xi)<0\qquad\text{and}\qquad \Omega_\ensuremath{\mathrm{k}}'(\xi)<0\qquad\text{for $\xi\in(-\infty,\xi_\ensuremath{\mathrm{w}})$.} \end{equation} \subsection{The density $\rho(t,r)$ for $t\geq 0$.} To identify the relevant solution $\tilde \Omega(\xi)$ for $\xi\in (\xi_\ensuremath{\mathrm{s}},\infty)$, we switch to the independent variable $x=\frac{1}{\xi}$ and set $\tilde D(x)=\tilde \Omega(\frac{1}{x})$. The ODE for $\tilde D(x)$, obtained from \eq{Omega_ode_u}, is \begin{equation}\label{D_ode} \frac{\tilde D'(x)}{\tilde D(x)}=\frac{1}{a^2x^2} \big[\textstyle \tilde U\big(\frac{1}{x}\big)-\frac{1}{x}\big]\tilde U'\big(\frac{1}{x}\big) \qquad \text{for $0<x<x_\ensuremath{\mathrm{s}},$} \end{equation} where $\tilde U(\xi)$ was determined above. It follows from requirement (B) in Section \ref{conv_div_isothermal} that we must have $\tilde D(0)=0$. Linearizing \eq{D_ode} about $(x,\tilde D)=(0,0)$ shows that the origin is a node where \begin{equation}\label{asymp_2} \tilde D(x)\sim C_+ x^{-\beta}\qquad \text{for $x\gtrsim 0$,} \end{equation} or \[\tilde \Omega(\xi)\sim C_+ \xi^{\beta}\qquad \text{as $\xi\to+\infty$.}\] This gives \begin{equation}\label{lim_from_above} \lim_{t\downarrow 0} \rho(t,r)=\lim_{t\downarrow0}\,\,t^\beta\tilde \Omega(\textstyle\frac{r}{t}) =C_+r^\beta\qquad \text{at any fixed location $r>0$.} \end{equation} Comparing with \eq{lim_from_below} and imposing continuity of $\rho(t,r)$ across $\{t=0\}$ yields $C_+=-C_-$, and this selects the unique, relevant solution $\tilde D(x)$ for $x\gtrsim 0$. It is now unproblematic to integrate \eq{D_ode} for $x\in (0,x_\ensuremath{\mathrm{s}})$ (where $x_\ensuremath{\mathrm{s}}=\frac{1}{\xi_\ensuremath{\mathrm{s}}}$), and it follows from \eq{D_ode}, together with the properties of $\tilde U(\xi)$, \eq{asymp_2}, and $C_+>0$, that $\tilde D(x)>0$ and $\tilde D'(x)>0$ for $0<x<x_\ensuremath{\mathrm{s}}$.
We therefore obtain that \begin{equation}\label{Omega_sign3} \tilde \Omega(\xi)>0\qquad\text{and}\qquad \tilde \Omega'(\xi)<0\qquad\text{for $\xi\in(\xi_\ensuremath{\mathrm{s}},\infty)$.} \end{equation} Having obtained $\tilde \Omega(\xi)$ for $\xi>\xi_\ensuremath{\mathrm{s}}$, we use the Rankine-Hugoniot relation \eq{-ito+}${}_2$ with $\bar \xi=\xi_\ensuremath{\mathrm{s}}$, $\Omega_+=\tilde \Omega(\xi_\ensuremath{\mathrm{s}})$ and $U_+=\tilde U(\xi_\ensuremath{\mathrm{s}})$, to calculate $\Omega_-$. This last value is used as initial data at $\xi=\xi_\ensuremath{\mathrm{s}}$ for the ODE \begin{equation}\label{hat_Omega_ode_2} \frac{\hat\Omega'(\xi)}{\hat\Omega(\xi)}=-\frac{1}{a^2}(\hat U(\xi)-\xi)\hat U'(\xi) \qquad\text{for $\xi\in(0,\xi_\ensuremath{\mathrm{s}})$.} \end{equation} We note that, since $(\tilde U(\xi_\ensuremath{\mathrm{s}})-\xi_\ensuremath{\mathrm{s}})^2>a^2$, \eq{-ito+}${}_2$ gives \[\hat \Omega(\xi_\ensuremath{\mathrm{s}})>\tilde \Omega(\xi_\ensuremath{\mathrm{s}})>0.\] It then follows from the properties of $\hat U(\xi)$ that the right-hand side of \eq{hat_Omega_ode_2} is a bounded and positive function on $[0,\xi_\ensuremath{\mathrm{s}}]$. Consequently, $\hat\Omega(\xi)$ is increasing there and approaches a strictly positive value $\Omega_0'$ at $\xi=0+$: \begin{equation}\label{Omega_sign4} \hat \Omega(\xi)>0\qquad\text{and}\qquad \hat\Omega'(\xi)>0 \qquad\text{for $\xi\in(0,\xi_\ensuremath{\mathrm{s}})$,} \qquad\text{and}\qquad\lim_{\xi\downarrow 0}\hat \Omega(\xi)=\Omega_0'>0. \end{equation} \bigskip \noindent {\bf Summing up:} The density field $\rho(t,r)$ is defined in terms of the solutions $\hat \Omega$, $\Omega_\ensuremath{\mathrm{k}}$, and $\tilde \Omega$ of the similarity ODE \eq{Omega_ode} as determined above, as follows: \begin{equation}\label{rho_final} \rho(t,r)=\sgn(t)|t|^\beta\Omega(\textstyle\frac{r}{t}) :=\left\{\begin{array}{ll} - |t|^\beta\hat\Omega(\frac{r}{t}) & \xi_\ensuremath{\mathrm{w}}\leq \frac{r}{t}\leq 0\\\\ - |t|^\beta \Omega_\ensuremath{\mathrm{k}}(\frac{r}{t}) & -\infty<\frac{r}{t}\leq \xi_\ensuremath{\mathrm{w}} \\\\ t^\beta\tilde\Omega(\frac{r}{t}) & \xi_\ensuremath{\mathrm{s}}<\frac{r}{t}< \infty \\\\ t^\beta\hat \Omega(\frac{r}{t}) &0\leq \frac{r}{t}< \xi_\ensuremath{\mathrm{s}}.\\ \end{array}\right. \end{equation} We note that, as for the radial speed given by \eq{u_final}, the density field suffers a weak discontinuity across $\{\frac{r}{t}=\xi_\ensuremath{\mathrm{w}}\}$ for $t<0$, and a jump discontinuity across $\{\frac{r}{t}=\xi_\ensuremath{\mathrm{s}}\}$ for $t>0$. As detailed at the end of Section \ref{u_for_pos_t}, the resulting shock wave along $\{r=\xi_\ensuremath{\mathrm{s}} t\}$ is, by construction, an entropy admissible 2-shock for the isothermal Euler system. Next, recalling \eq{Omega_sign1}, \eq{Omega_sign2}, \eq{Omega_sign3}, and \eq{Omega_sign4}, we have that $\Omega(\xi)\neq 0$ for all values of $\xi$. Furthermore, the density field at the time of collapse $t=0$ is given by \begin{equation}\label{density_at_collapse} \rho(0,r)=|C_-|r^\beta\qquad r>0. \end{equation} It follows from this that requirement (C) above is met by the density field given by \eq{rho_final}: $\rho(t,r)>0$ for all $t\in\mathbb{R}$ and all $r\geq 0$. Finally, \eq{lim_from_below}, \eq{lim_from_above}, and the choice $C_+=-C_-$, show that also requirement (B) is satisfied. 
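For illustration, the before-collapse profiles on $[\xi_\ensuremath{\mathrm{w}},0]$ can be generated numerically by integrating \eq{U_ode} together with \eq{Omega_ode_u} as a coupled system, tracking $\log|\Omega|$ since $\Omega<0$ on this branch. A minimal sketch (again assuming NumPy and SciPy; the parameter values and the choice $\Omega_0=-1$ of the free constant are illustrative):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

a, m, beta = 1.0, 2.0, -1.0        # illustrative case
n = m + 1
xi_w = -a*m/(m + beta)             # = -2a

# couple U' (eq. (U_ode)) with (log|Omega|)' = -(U - xi) U' / a^2
def rhs(xi, y):
    U, logOm = y
    dU = a**2*(m*U + beta*xi) / (xi*((U - xi)**2 - a**2))
    return [dU, -(U - xi)*dU/a**2]

Omega0 = -1.0                      # the free constant Omega_0 < 0
xi0 = -1e-6                        # start just to the left of the origin
sol = solve_ivp(rhs, [xi0, xi_w*(1 - 1e-6)], [-beta/n*xi0, np.log(-Omega0)],
                rtol=1e-10, atol=1e-12)

U_hat = sol.y[0]
Omega_hat = -np.exp(sol.y[1])      # restore the sign: Omega < 0 on [xi_w, 0]
print("Omega near xi_w:", Omega_hat[-1])
\end{verbatim}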
\begin{remark} The above construction of $\rho(t,r)$ and $u(t,r)$ provides a 2-parameter family of concrete solutions to the radial, isothermal Euler system in $n=2$ and $n=3$ space dimensions. The solutions depend on the similarity exponent $\beta$, which varies in $(-n+1,0)$ so as to satisfy Assumption \ref{assmpn}, and on the constant $\Omega_0<0$, which determines the density along the center of motion before collapse ($\rho(t,0)=|\Omega_0||t|^\beta$ for $t<0$). \end{remark} \section{Weak and radial weak Euler solutions} \label{weak_solns} It remains to verify that the radial solutions of the isothermal Euler system constructed above do indeed provide genuine, weak solutions to the original, multi-d isothermal Euler system \eq{mass_m_d_isthrml_eul}-\eq{mom_m_d_isthrml_eul}. In this section we formulate the definition of a weak solution to the barotropic Euler system: first for general, multi-d solutions, and then specialized to the case of radial solutions. \subsection{Multi-d weak solutions} We write $\rho(t)$ for $\rho(t,\cdot)$ etc., ${\bf u}=(u_1,\dots,u_n)$, $u:=|{\bf u}|$, and let ${\bf x}=(x_1,\dots,x_n)$ denote the spatial variable in $\mathbb{R}^n$, while $r=|{\bf x}|$ varies over $\mathbb{R}_0^+=[0,\infty)$. \begin{definition}\label{weak_soln} Consider the compressible, barotropic Euler system \eq{mass_m_d_isthrml_eul}-\eq{mom_m_d_isthrml_eul} in $n$ space dimensions with a given pressure function $p=p(\rho)\geq 0$. Then the measurable functions $\rho,\, u_1,\dots,u_n:\mathbb{R}_t\times \mathbb{R}_{\bf x}^n\to \mathbb{R}$ constitute a {\em weak solution} to \eq{mass_m_d_isthrml_eul}-\eq{mom_m_d_isthrml_eul} provided that: \begin{itemize} \item[(1)] the maps $t\mapsto \rho(t)$ and $t\mapsto \rho(t) u(t)$ belong to $C^0(\mathbb{R}_t;L^1_{loc}(\mathbb{R}^n_{\bf x}))$; \item[(2)] the functions $\rho u^2$ and $p$ belong to $L^1_{loc}(\mathbb{R}_t\times\mathbb{R}^n_{\bf x})$; \item[(3)] the conservation laws for mass and momentum are satisfied weakly in the sense that \begin{equation}\label{m_d_mass_weak} \int_\mathbb{R}\int_{\mathbb{R}^n} \rho\varphi_t+\rho{\bf u}\cdot\nabla_{\bf x}\varphi \, d{\bf x}dt =0 \end{equation} and \begin{equation}\label{m_d_mom_weak} \int_\mathbb{R}\int_{\mathbb{R}^n} \rho u_i\varphi_t +\rho u_i{\bf u}\cdot\nabla_{\bf x}\varphi+p\varphi_{x_i}\, d{\bf x}dt =0 \qquad \text{for $i=1,\dots, n$,} \end{equation} whenever $\varphi\in C_c^1(\mathbb{R}_t\times \mathbb{R}^n_{\bf x})$ ($C^1$ functions with compact support). \end{itemize} \end{definition} \begin{remark} Here, condition (1) guarantees that the conserved quantities define continuous maps into $L^1_{loc}(\mathbb{R}^n_{\bf x})$, which is the natural function space in this setting. Taken together, conditions (1) and (2) ensure that all terms occurring in the weak formulations \eq{m_d_mass_weak} and \eq{m_d_mom_weak} are locally integrable in space and time. \end{remark} \begin{remark} Our goal is to show that the converging-diverging flow \begin{equation}\label{assmbld} \rho(t,{\bf x})=\rho(t,r),\qquad {\bf u}(t,{\bf x})=u(t,r)\frac{{\bf x}}{r}, \end{equation} where $\rho(t,r)$ and $u(t,r)$ are given by \eq{rho_final} and \eq{u_final}, respectively, constitutes a weak solution to \eq{mass_m_d_isthrml_eul}-\eq{mom_m_d_isthrml_eul} (with $p=a^2\rho$) according to the definition above. Since these flows by construction involve a single, compressive shock wave, which was verified above to be entropy admissible, we do not address admissibility of weak solutions further.
\end{remark} \subsection{Radial weak solutions} We next rewrite Definition \ref{weak_soln} for radial solutions. For this we use the following notation. As above $m:=n-1$ and we set \[\mathbb{R}^+=(0,\infty),\qquad \mathbb{R}_0^+=[0,\infty),\qquad L^1_{(loc)}(dt\times r^mdr)=L^1_{(loc)}(\mathbb{R}\times\mathbb{R}^+_0,dt\times r^mdr).\] Also, $C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$ denotes the set of real-valued functions $\psi(t,r)$ defined on $\mathbb{R}\times\mathbb{R}^+_0$ and with the property that $\psi$ is $C^1$ smooth on $\mathbb{R}\times\mathbb{R}^+_0$ and vanishes outside $[-\bar t,\bar t]\times[0,\bar r]$ for some $\bar t,\, \bar r\in\mathbb{R}^+$. Finally, we let $C^1_0(\mathbb{R}\times\mathbb{R}^+_0)$ denote the set of those functions $\psi\in C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$ with the additional property that $\psi(t,0)\equiv 0$. Using these function classes, the weak formulation of the multi-d Euler system \eq{mass_m_d_isthrml_eul}-\eq{mom_m_d_isthrml_eul}, for radial solutions, takes the following form. \begin{definition}\label{rad_symm_weak_soln} Consider the radial version \eq{mass}-\eq{momentum} of the compressible Euler system \eq{mass_m_d_isthrml_eul}-\eq{mom_m_d_isthrml_eul} with a given pressure function $p=p(\rho)\geq 0$. Then the measurable functions $\rho,\, u:\mathbb{R}_t\times \mathbb{R}^+_r\to \mathbb{R}$ constitute a {\em radial weak solution} to \eq{mass}-\eq{momentum} provided that: \begin{itemize} \item[(i)] the maps $t\mapsto \rho(t)$ and $t\mapsto \rho(t)u(t)$ belong to $C^0(\mathbb{R}_t;L^1_{loc}(r^mdr))$; \item[(ii)] the functions $\rho u^2$ and $p$ belong to $L^1_{loc}(dt\times r^mdr)$; \item[(iii)] the conservation laws for mass and momentum are satisfied in the sense that \begin{align} \int_{\mathbb{R}}\int_{\mathbb{R}^+} \left(\rho\psi_t+\rho u\psi_r\right) r^mdrdt &=0 \qquad\forall \psi\in C^1_c(\mathbb{R}\times\mathbb{R}^+_0) \label{radial_mass_weak}\\ \int_{\mathbb{R}}\int_{\mathbb{R}^+} \left(\rho u\psi_t +\rho u^2\psi_r+p\big(\psi_r+\textstyle\frac{m\psi}{r}\big)\right) r^mdrdt &=0 \qquad\forall \psi\in C^1_0(\mathbb{R}\times\mathbb{R}^+_0).\label{radial_mom_weak} \end{align} \end{itemize} \end{definition} The demonstration that a radial weak solution $(\rho,u)$ yields, via \eq{assmbld}, a weak solution of the multi-d system according to Definition \ref{weak_soln} was provided by Hoff \cite{hoff} in the context of radial, isothermal Navier-Stokes flows. (See \cite{jt1} for the corresponding analysis in the case of radial, non-isentropic Euler flows.) \section{Radial converging-diverging similarity solutions as weak solutions} \label{sim_weak_solns} In this section we return to isothermal flow ($p=a^2\rho$) and the radial converging-diverging similarity solutions constructed in Section \ref{constr_conv_div_solns}. We want to establish properties (i), (ii), and (iii) in Definition \ref{rad_symm_weak_soln} for these solutions, and we first consider the continuity and integrability requirements in (i) and (ii). The weak forms of the equations are treated in Section \ref{weak_forms}. \subsection{Continuity and local integrability} With $\rho(t,r)$ and $u(t,r)$ given by \eq{rho_final} and \eq{u_final}, we proceed to verify parts (i) and (ii) of Definition \ref{rad_symm_weak_soln}.
For this we fix $\bar r>0$, define \[M(t;\bar r):=\int_0^{\bar r} \rho(t,r)r^m\, dr,\qquad I_q(t;\bar r):=\int_0^{\bar r} \rho(t,r)|u(t,r)|^qr^m\, dr\qquad (q=1,\, 2),\] and observe that, in the particular case under consideration, where $p\propto \rho$, (i) and (ii) both follow once we verify that the maps $t\mapsto M(t;\bar r)$, $t\mapsto I_1(t;\bar r)$, and $t\mapsto I_2(t;\bar r)$ are continuous at all times $t\in\mathbb{R}$. Now, as $\rho(t,r)$ and $u(t,r)$ are bounded functions, except at the time of collapse ($t=0$), it is sufficient to verify the continuity of $M(t;\bar r)$ and $I_q(t;\bar r)$ ($q=1$, $2$) across $t=0$. According to \eq{density_at_collapse}, together with the standing assumption $\beta+m>0$, we have that $M(0;\bar r)$ is finite and given by \[M(0;\bar r)=\frac{|C_-|}{\beta+n}\bar r^{\beta+n}.\] For $t<0$ (and small enough that $\xi_\ensuremath{\mathrm{w}} t<\bar r$) we have, substituting $r=t\xi$ (so that $dr=t\,d\xi$ and $r^m=|t|^m|\xi|^m$), \begin{align*} M(t;\bar r)&=\int_0^{\bar r}\rho(t,r) r^m\, dr =\int_{\bar r/t}^{0}|t|^\beta |\Omega(\xi)|\,|\xi|^m|t|^{m+1}\, d\xi\nonumber\\ &=|t|^{\beta+n}\Big[\int_{\bar r/t}^{\xi_\ensuremath{\mathrm{w}}}|\Omega_\ensuremath{\mathrm{k}}(\xi)||\xi|^m\, d\xi +\int_{\xi_\ensuremath{\mathrm{w}}}^0|\hat \Omega(\xi)||\xi|^m\, d\xi \Big]. \end{align*} Here the last term in the brackets is a bounded number, while L'H\^opital's rule applied to the first term gives \begin{align*} \lim_{t\uparrow0}M(t;\bar r)&=\lim_{t\uparrow0}\frac{1}{|t|^{-\beta-n}} \int_{\bar r/t}^{\xi_\ensuremath{\mathrm{w}}}|\Omega_\ensuremath{\mathrm{k}}(\xi)||\xi|^m\, d\xi\nonumber\\ &=\lim_{t\uparrow0}\frac{\bar r^n}{\beta+n}|t|^\beta|\Omega_\ensuremath{\mathrm{k}}({\textstyle\frac{\bar r}{t}})| =\frac{|C_-|\bar r^{\beta+n}}{\beta+n}, \end{align*} where we have used \eq{lim_from_below}. An entirely similar calculation, now using \eq{lim_from_above} and with $\xi_\ensuremath{\mathrm{s}}$ playing the role of $\xi_\ensuremath{\mathrm{w}}$, shows that \[\lim_{t\downarrow0}M(t;\bar r)=\lim_{t\downarrow0} \frac{\bar r^n}{\beta+n}t^\beta\tilde\Omega({\textstyle\frac{\bar r}{t}}) =\frac{C_+\bar r^{\beta+n}}{\beta+n}.\] As $C_+=|C_-|$, this establishes the continuity of $t\mapsto M(t;\bar r)$ at time $t=0$, and thus for all times. Next, according to \eq{speed_at_collapse} and \eq{density_at_collapse}, we have \[I_q(0;\bar r)=\int_0^{\bar r} \rho(0,r)|u(0,r)|^qr^m\, dr =\frac{|C_-||U^*|^q}{\beta+n}\bar r^{\beta+n}.\] As above, for $t\lesssim 0$, we have \begin{align*} I_q(t;\bar r)&=\int_0^{\bar r}\rho(t,r) |u(t,r)|^q r^m\, dr =\int_{\bar r/t}^{0}|t|^\beta |\Omega(\xi)||U(\xi)|^q\,|\xi|^m|t|^{m+1}\, d\xi\nonumber\\ &=|t|^{\beta+n}\Big[\int_{\bar r/t}^{\xi_\ensuremath{\mathrm{w}}}|\Omega_\ensuremath{\mathrm{k}}(\xi)||U_\ensuremath{\mathrm{k}}(\xi)|^q|\xi|^m\, d\xi +\int_{\xi_\ensuremath{\mathrm{w}}}^0|\hat \Omega(\xi)||\hat U(\xi)|^q|\xi|^m\, d\xi \Big]. \end{align*} Again, here the last term in the brackets is a bounded number, while L'H\^opital's rule applied to the first term gives \begin{align*} \lim_{t\uparrow0}I_q(t;\bar r)&=\lim_{t\uparrow0}\frac{1}{|t|^{-\beta-n}} \int_{\bar r/t}^{\xi_\ensuremath{\mathrm{w}}}|\Omega_\ensuremath{\mathrm{k}}(\xi)||U_\ensuremath{\mathrm{k}}(\xi)|^q|\xi|^m\, d\xi\nonumber\\ &=\lim_{t\uparrow0}\frac{\bar r^n}{\beta+n}|t|^\beta|\Omega_\ensuremath{\mathrm{k}}({\textstyle\frac{\bar r}{t}})| |U_\ensuremath{\mathrm{k}}({\textstyle\frac{\bar r}{t}})|^q =\frac{|C_-||U^*|^q}{\beta+n}\bar r^{\beta+n}, \end{align*} where we have used \eq{lim_from_below}.
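(In the applications of L'H\^opital's rule above we used that, for $t<0$, \[\frac{d}{dt}\int_{\bar r/t}^{\xi_\ensuremath{\mathrm{w}}}f(\xi)\,d\xi=f\big(\tfrac{\bar r}{t}\big)\frac{\bar r}{t^2} \qquad\text{and}\qquad \frac{d}{dt}|t|^{-\beta-n}=(\beta+n)|t|^{-\beta-n-1},\] so that the quotient of the derivatives equals $\frac{\bar r}{\beta+n}\,f\big(\tfrac{\bar r}{t}\big)\,|t|^{\beta+n-1}$; taking $f(\xi)=|\Omega_\ensuremath{\mathrm{k}}(\xi)||\xi|^m$, respectively $f(\xi)=|\Omega_\ensuremath{\mathrm{k}}(\xi)||U_\ensuremath{\mathrm{k}}(\xi)|^q|\xi|^m$, yields the limits displayed above.)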
A similar calculation shows that \[\lim_{t\downarrow0}I_q(t;\bar r) =\frac{C_+|U^*|^q}{\beta+n}\bar r^{\beta+n}.\] As $C_+=|C_-|$, this establishes the continuity of the maps $t\mapsto I_q(t;\bar r)$, $q=1$, $2$, at time $t=0$, and thus for all times. We have thus verified requirements (i) and (ii) of Definition \ref{rad_symm_weak_soln} for the isothermal converging-diverging solutions $(\rho(t,r),u(t,r))$ constructed in Section \ref{constr_conv_div_solns}. \subsection{Weak form of the equations}\label{weak_forms} Finally, for part (iii) of Definition \ref{rad_symm_weak_soln}, we need to verify the weak forms \eq{radial_mass_weak}, \eq{radial_mom_weak}. For this we shall exploit that the local integrability properties in parts (i) and (ii) of Definition \ref{rad_symm_weak_soln} have been verified. The issue will then reduce to estimating the fluxes of the conserved quantities across spheres of vanishing radii. For $\psi\in C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$ (with $\psi\in C^1_0(\mathbb{R}\times\mathbb{R}^+_0)$ in the case of \eq{I_psi} below), with $\supp\psi\subset[-T,T]\times [0,\bar r]$, and any small $\delta>0$, we define the regions \[J_\delta=\left\{(t,r)\,|\, -T<t<T,\, \delta<r<\bar r,\, \textstyle\frac{t}{r}<\frac{1}{\xi_\ensuremath{\mathrm{s}}} \right\},\] and \[K_\delta=\left\{(t,r)\,|\, -T<t<T,\, \delta<r<\bar r,\, \textstyle\frac{t}{r}>\frac{1}{\xi_\ensuremath{\mathrm{s}}} \right\},\] (see Figure 3), and set \begin{align} M(\psi)&:=\iint_{\mathbb{R}\times\mathbb{R}^+} \left(\rho\psi_t+\rho u\psi_r\right) r^mdrdt \nonumber\\ &= \Big\{\iint_{\mathbb{R}\times [0,\delta]} +\iint_{J_\delta} +\iint_{K_\delta}\Big\} \left(\rho\psi_t+\rho u\psi_r\right) r^mdrdt \nonumber\\ &=:M_\delta(\psi) +\Big\{\iint_{J_\delta} +\iint_{K_\delta}\Big\} \left(\rho\psi_t+\rho u\psi_r\right) r^mdrdt \label{M_psi} \end{align} and \begin{align} I(\psi)&:=\iint_{\mathbb{R}\times\mathbb{R}^+}\left(\rho u\psi_t +\rho u^2\psi_r+p\big(\psi_r+\textstyle\frac{m\psi}{r}\big)\right) r^mdrdt \nonumber\\ &= \Big\{\iint_{\mathbb{R}\times [0,\delta]} +\iint_{J_\delta} +\iint_{K_\delta}\Big\} \left(\rho u\psi_t +\rho u^2\psi_r+p\big(\psi_r+\textstyle\frac{m\psi}{r}\big)\right) r^mdrdt \nonumber\\ &=:I_\delta(\psi) +\Big\{\iint_{J_\delta}+\iint_{K_\delta}\Big\} \left(\rho u\psi_t +\rho u^2\psi_r+p\big(\psi_r+\textstyle\frac{m\psi}{r}\big)\right) r^mdrdt. \label{I_psi} \end{align} The goal is to verify that $M(\psi)$ and $I(\psi)$ vanish by showing that the right-hand sides of \eq{M_psi} and \eq{I_psi} vanish as $\delta\downarrow 0$. \begin{figure} \centering \includegraphics[width=8cm,height=9cm]{Regions} \caption{Regions of integration in the weak formulation.}\label{Figure_3} \end{figure} We first note that the continuity of the maps $t\mapsto M(t;\bar r)$, $t\mapsto I_1(t;\bar r)$, and $t\mapsto I_2(t;\bar r)$, which was established above, implies the local $r^mdrdt$-integrability of $\rho$, $p\propto \rho$, $\rho u$, and $\rho u^2$. As a consequence, both $M_\delta(\psi)$ and $I_\delta(\psi)$ tend to zero as $\delta\downarrow 0$. (Note that for $I_\delta(\psi)$, we make use of the fact that $\psi$ belongs to the space $C^1_0(\mathbb{R}\times\mathbb{R}^+_0)$; in particular, $\frac{m\psi}{r}$ is a bounded term.) It remains to estimate the integrals over $J_\delta$ and $K_\delta$ in \eq{M_psi} and \eq{I_psi}.
For this we first recall that $(\rho,u)$, by construction, is a classical (Lipschitz) solution of the isothermal Euler system \eq{mass}-\eq{momentum} within each of $J_\delta$ and $K_\delta$, and that the Rankine-Hugoniot relations \eq{rh_1}-\eq{rh_2}, with $\dot{\mathcal R}=\xi_\ensuremath{\mathrm{s}}$, are satisfied across their common boundary along the straight line $\{r=\xi_\ensuremath{\mathrm{s}} t\}$. Applying the divergence theorem to each region we therefore have \begin{equation}\label{part1} \Big\{\iint_{J_\delta} +\iint_{K_\delta}\Big\}\left(\rho\psi_t+\rho u\psi_r\right)\, r^mdrdt =-\delta^m\int_{-T}^T(\rho u\psi)(t,\delta)\, dt \end{equation} and \begin{equation}\label{part2} \Big\{\iint_{J_\delta}+\iint_{K_\delta}\Big\}\left(\rho u\psi_t +\rho u^2\psi_r+p\big(\psi_r+\textstyle\frac{m\psi}{r}\big)\right) r^mdrdt =-\delta^m\int_{-T}^T[(\rho u^2+p)\psi](t,\delta)\, dt. \end{equation} Since the speed $u(t,r)$ under consideration is globally bounded, $\psi(t,r)$ is a bounded function, and $p\propto \rho$, it follows that, to estimate these expressions, it suffices to consider the single quantity $\delta^m\int_{-T}^T \rho(t,\delta)\,dt$. We have, using \eq{rho_final} and switching to $\xi$ as integration variable, \begin{align} \delta^m\int_{-T}^T \rho(t,\delta)\, dt &= \delta^{n+\beta}\Big\{\int_{\xi_\ensuremath{\mathrm{w}}}^{-\delta/T}+\int_{-\infty}^{\xi_\ensuremath{\mathrm{w}}} +\int_{\xi_\ensuremath{\mathrm{s}}}^\infty+\int_{\delta/T}^{\xi_\ensuremath{\mathrm{s}}}\Big\}\frac{|\Omega(\xi)|}{|\xi|^{\beta+2}}\,d\xi\nonumber\\ &= \delta^{n+\beta}\Big\{\int_{\xi_\ensuremath{\mathrm{w}}}^{-\delta/T}\frac{|\hat\Omega(\xi)|}{|\xi|^{\beta+2}}\,d\xi +\int_{-\infty}^{\xi_\ensuremath{\mathrm{w}}}\frac{|\Omega_\ensuremath{\mathrm{k}}(\xi)|}{|\xi|^{\beta+2}}\,d\xi +\int_{\xi_\ensuremath{\mathrm{s}}}^\infty\frac{\tilde \Omega(\xi)}{\xi^{\beta+2}}\,d\xi +\int_{\delta/T}^{\xi_\ensuremath{\mathrm{s}}}\frac{\hat \Omega(\xi)}{\xi^{\beta+2}}\,d\xi\Big\}. \label{flux_at_delta} \end{align} According to \eq{Omega_asymp} and \eq{asymp_2}, we have, for a suitable constant $C$, \[|\Omega_\ensuremath{\mathrm{k}}(\xi)|\leq C|\xi|^\beta \quad\text{for $\xi<\xi_\ensuremath{\mathrm{w}}$, and}\quad \tilde\Omega(\xi)\leq C\xi^\beta \quad\text{for $\xi>\xi_\ensuremath{\mathrm{s}}$.}\] Also, according to the construction in Section \ref{constr_conv_div_solns}, $\hat\Omega(\xi)$ is a bounded function. Using these in \eq{flux_at_delta}, we get that \begin{align*} \delta^m\int_{-T}^T \rho(t,\delta)\, dt &\leq const. \delta^{n+\beta} \left\{ \begin{array}{ll} 1+\frac{1}{\delta^{\beta+1}} & \text{for $\beta\neq -1$}\\\\ 1+|\log\delta| & \text{for $\beta= -1$.} \end{array}\right. \end{align*} As $m+\beta>0$ by assumption, we conclude that \[\lim_{\delta\downarrow0}\delta^m\int_{-T}^T \rho(t,\delta)\, dt =0\] for all cases under consideration. As noted above, this implies that the integrals in \eq{part1} and \eq{part2} tend to zero as $\delta\downarrow0$. This concludes the proof that $(\rho,u)$ satisfies the weak form \eq{radial_mass_weak}-\eq{radial_mom_weak} of the radial, isothermal Euler system. We summarize our findings in the following theorem. We recall that the kink-solution $U_\ensuremath{\mathrm{k}}(\xi)$ refers to the unique solution of the similarity ODE \eq{U_ode} on $(-\infty,\xi_\ensuremath{\mathrm{w}})$ which approaches the critical point $(\xi_\ensuremath{\mathrm{w}},U_\ensuremath{\mathrm{w}})$ with slope $1-\lambda_+$, where $\lambda_+$ is given by \eq{lambdas}.
We also recall the assumption that its limiting value $U^*$ at $\xi=-\infty$ is strictly negative (the analysis in Section \ref{constr_conv_div_solns} shows that this is a non-vacuous assumption). \begin{theorem}\label{main_result} Consider the radial, isothermal Euler system \eq{mass}-\eq{momentum} with pressure function $p=a^2\rho$ in $n=2$ or $3$ space dimensions. With $m=n-1$, choose any $\beta\in(-m,0)$ so that the limiting value $U^*$ of the kink-solution $U_\ensuremath{\mathrm{k}}(\xi)$ at $\xi=-\infty$ satisfies $U^*<0$. Then, the functions $U(\xi)$ and $\Omega(\xi)$ constructed in Section \ref{constr_conv_div_solns} yield, via \eq{fncl_relns}, a radial weak solution $(\rho(t,r),u(t,r))$ to \eq{mass}-\eq{momentum}, according to Definition \ref{rad_symm_weak_soln}. In particular, any such solution provides a weak solution $\rho(t,{\bf x}):=\rho(t,|{\bf x}|)$, ${\bf u}(t,{\bf x}):=u(t,|{\bf x}|)\frac{{\bf x}}{|{\bf x}|}$ to the original, multi-d isothermal system \eq{mass_m_d_isthrml_eul}-\eq{mom_m_d_isthrml_eul}, according to Definition \ref{weak_soln}. Finally, any such solution involves a continuous, focusing wave, followed by an expanding shock wave, and suffers amplitude blowup of its density field at the origin $(t,{\bf x})=(0,0)$, with $\rho(0,{\bf x})\propto |{\bf x}|^\beta$, while its velocity field remains globally bounded. \end{theorem} \section{Final remarks}\label{final_rmks} First, for any fixed time $t$, as $r\to\infty$ the radial speed $u(t,r)$ tends to $U^*<0$, while the density $\rho(t,r)$ tends to zero. However, the latter decay is too slow to give bounded total mass. In fact, the solutions constructed above have both unbounded total mass and unbounded total energy. E.g., the mass density $\rho(t,r)r^m$ grows like $r^{\beta +m}$ for $t$ fixed as $r\to\infty$, and the standing assumption that $\beta+m>0$ yields unbounded mass. A similar calculation shows that the total energy density \[E(t,r):=\big[ \textstyle \frac{1}{2}\rho(t,r) u(t,r)^2 +a^2\rho(t,r)\log \rho(t,r)\big]r^m\] has unbounded integral at all times. On the other hand, as verified above, mass and energy are both locally integrable with respect to space at any fixed time. Next, consider the behavior of characteristics $\dot r=u\pm a$ and particle trajectories $\dot r=u$ in the constructed solutions. We first note that the only possibility for the path $\xi=\bar\xi$ (constant) to be a characteristic is for $\bar \xi$ to have the value $\xi_\ensuremath{\mathrm{w}}$. This yields the ``critical,'' converging 1-characteristic through the origin. All 1-characteristics below the critical one end up along $\{r=0\}$ at negative times (with speed $-a$), while all 1-characteristics above it cross $\{t=0\}$ (all with speed $U^*-a$ and at strictly positive distances to the origin), and subsequently disappear into the reflected shock wave propagating along $r=\xi_\ensuremath{\mathrm{s}} t$. Next, all particle trajectories cross the critical characteristic from below (in the $(r,t)$-plane) and proceed to cross $\{t=0\}$ with speed $U^*$. It follows that there is no ``accumulation'' of particles at the center of motion; in particular, the trivial particle trajectory $r(t)\equiv 0$ is the unique one passing through the origin. Consequently, the density $\rho(t,r)$ does not ``contain a Dirac delta'' at time of collapse. (Solutions of ``cumulative'' type where all, or part, of the mass concentrates at the origin at some instant have been considered in \cites{kell,am}.)
Finally, let $\{r=\mathfrak c(t)\}$ be any 1-characteristic above the critical 1-characteristic $\{\xi=\xi_\ensuremath{\mathrm{w}}\}$; then $\mathfrak c(0)>0$. We could now replace the constructed similarity solution on $\{r>\mathfrak c(t)\}$ with a solution (e.g., a simple wave with the same values along $\{r=\mathfrak c(t)\}$) of finite mass and energy in this outer region, without affecting the behavior of the solution within $\{r<\mathfrak c(t)\}$. This shows that the type of amplitude blowup exhibited by the original similarity solution is possible also in solutions with finite mass and energy. \bigskip \paragraph{\bf Acknowledgment:} This work was supported in part by NSF awards DMS-1813283 (Jenssen) and DMS-1714912 (Tsikkou). \begin{bibdiv} \begin{biblist} \bib{am}{book}{ author={Atzeni, S.}, author={Meyer-ter-Vehn, J.}, title={The Physics of Inertial Fusion}, series={International Series of Monographs on Physics}, volume={125}, publisher={Oxford University Press, Oxford}, date={2004}, } \bib{cp}{article}{ author={Chen, Gui-Qiang G.}, author={Perepelitsa, Mikhail}, title={Vanishing viscosity solutions of the compressible Euler equations with spherical symmetry and large initial data}, journal={Comm. Math. Phys.}, volume={338}, date={2015}, number={2}, pages={771--800}, issn={0010-3616}, review={\MR{3351058}}, } \bib{cs}{article}{ author={Chen, Gui-Qiang G.}, author={Schrecker, Matthew R. I.}, title={Vanishing viscosity approach to the compressible Euler equations for transonic nozzle and spherically symmetric flows}, journal={Arch. Ration. Mech. Anal.}, volume={229}, date={2018}, number={3}, pages={1239--1279}, issn={0003-9527}, review={\MR{3814602}}, doi={10.1007/s00205-018-1239-z}, } \bib{cf}{book}{ author={Courant, R.}, author={Friedrichs, K. O.}, title={Supersonic flow and shock waves}, note={Reprinting of the 1948 original; Applied Mathematical Sciences, Vol. 21}, publisher={Springer-Verlag}, place={New York}, date={1976}, pages={xvi+464}, review={\MR{0421279 (54 \#9284)}}, } \bib{gud}{article}{ author={Guderley, G.}, title={Starke kugelige und zylindrische Verdichtungsst\"{o}sse in der N\"{a}he des Kugelmittelpunktes bzw. der Zylinderachse}, language={German}, journal={Luftfahrtforschung}, volume={19}, date={1942}, pages={302--311}, review={\MR{0008522}}, } \bib{hoff}{article}{ author={Hoff, David}, title={Spherically symmetric solutions of the Navier-Stokes equations for compressible, isothermal flow with large, discontinuous initial data}, journal={Indiana Univ. Math. J.}, volume={41}, date={1992}, pages={1225--1302}, } \bib{jt1}{article}{ author={Jenssen, Helge Kristian}, author={Tsikkou, Charis}, title={On similarity flows for the compressible Euler system}, journal={J. Math. Phys.}, volume={59}, date={2018}, number={12}, pages={121507, 25}, issn={0022-2488}, review={\MR{3894017}}, doi={10.1063/1.5049093}, } \bib{kell}{article}{ author={Keller, J. B.}, title={Spherical, cylindrical and one-dimensional gas flows}, journal={Quart. Appl. Math.}, volume={14}, date={1956}, pages={171--184}, } \bib{laz}{article}{ author={Lazarus, Roger B.}, title={Self-similar solutions for converging shocks and collapsing cavities}, journal={SIAM J. Numer. Anal.}, volume={18}, date={1981}, number={2}, pages={316--371}, } \bib{mmu1}{article}{ author={Makino, Tetu}, author={Mizohata, Kiyoshi}, author={Ukai, Seiji}, title={The global weak solutions of compressible Euler equation with spherical symmetry}, journal={Japan J. Indust. Appl.
Math.}, volume={9}, date={1992}, number={3}, pages={431--449}, issn={0916-7005}, review={\MR{1189949}}, doi={10.1007/BF03167276}, } \bib{mmu2}{article}{ author={Makino, Tetu}, author={Mizohata, Kiyoshi}, author={Ukai, Seiji}, title={Global weak solutions of the compressible Euler equation with spherical symmetry. II}, journal={Japan J. Indust. Appl. Math.}, volume={11}, date={1994}, number={3}, pages={417--426}, issn={0916-7005}, review={\MR{1299954}}, doi={10.1007/BF03167230}, } \bib{sch}{article}{ author={Schrecker, Matthew R. I.}, title={Spherically symmetric solutions of the multi-dimensional, compressible, isentropic Euler equations}, journal={arXiv:1901.09736}, date={2019}, } \bib{sch1}{article}{ author={Schrecker, Matthew R. I.}, title={Private communication}, } \bib{zheng}{book}{ author={Zheng, Yuxi}, title={Systems of conservation laws}, series={Progress in Nonlinear Differential Equations and their Applications, 38}, note={Two-dimensional Riemann problems}, publisher={Birkh\"auser Boston Inc.}, place={Boston, MA}, date={2001}, pages={xvi+317}, isbn={0-8176-4080-0}, review={\MR{1839813 (2002e:35155)}}, } \end{biblist} \end{bibdiv} \end{document}
{ "timestamp": "2019-04-16T02:10:25", "yymm": "1904", "arxiv_id": "1904.06537", "language": "en", "url": "https://arxiv.org/abs/1904.06537" }
\section{Introduction} \subsection{Background} In their pioneering work \cite{MvN36,MvN43}, Murray and von Neumann found a natural way to associate a II$_1$ factor, denoted $L(\Gamma)$, to every countable infinite conjugacy class group $\Gamma$ and a II$_1$ factor, denoted $L^\infty(X)\rtimes\Gamma$, to any free ergodic probability measure preserving action $\Gamma\curvearrowright (X,\mu).$ The classification of these group and group measure space von Neumann algebras is in general a very difficult problem. Nevertheless, a plethora of remarkable results have been obtained in the last 15 years due to S. Popa's influential deformation/rigidity theory; see the surveys \cite{Po07,Va10a, Io12a,Io17}. A central theme is the study of tensor product decompositions. A II$_1$ factor $M$ is called {\it prime} if it cannot be decomposed as a tensor product of II$_1$ factors. Primeness results were initially explored in the group von Neumann algebra setting. In \cite{Po83}, S. Popa discovered the first examples of prime II$_1$ factors by showing that the von Neumann algebra of any free group on uncountably many generators is prime. Using D. Voiculescu's free probability theory, L. Ge provided the first examples of separable prime II$_1$ factors by proving that the free group factors $L(\mathbb F_n),$ $2\leq n\leq \infty$, are also prime \cite{Ge96}. By providing new methods in the C$^*$-algebraic setting, N. Ozawa proved that any infinite conjugacy class (icc) hyperbolic group $\Gamma$ gives rise to a {\it solid} II$_1$ factor $L(\Gamma)$, meaning that the relative commutant of any diffuse subalgebra of $L(\Gamma)$ is amenable \cite{Oz03}; in particular it follows that $L(\Gamma)$ is prime. In \cite{Pe06}, by developing an innovative technique based on closable derivations, J. Peterson showed primeness of $L(\Gamma)$ for any icc non-amenable group $\Gamma$ which has positive first Betti number. S. Popa then used his deformation/rigidity theory and gave an alternative proof of solidity of $L(\mathbb F_n)$ \cite{Po06b}. The intense research activity over the last decade has resulted in many other primeness results, see \cite{Oz04, Po06a, CI08, CH08, Va10b, Bo12, HV12, DI12, CKP14, Ho15}. In all these results some negative curvature condition on $\Gamma$ is needed, in the form of a geometric assumption (e.g. $\Gamma$ is a hyperbolic group), or a cohomological assumption (e.g. the existence of a certain unbounded quasi-cocycle). Either of these two conditions can be seen as a ``rank one'' property. Concerning the primeness problem in the framework of group measure space von Neumann algebras, the techniques presented in the aforementioned papers can be used to show that any free ergodic probability measure preserving (pmp) action of such groups gives rise to a prime II$_1$ factor. Specifically, N. Ozawa showed that $L^\infty(X)\rtimes\Gamma$ is prime whenever $\Gamma\curvearrowright (X,\mu)$ is a free ergodic pmp action of a non-elementary hyperbolic group \cite{Oz04} (see also \cite{CS11}). By obtaining new Bass-Serre type rigidity results for II$_1$ factors, I. Chifan and C. Houdayer showed that the II$_1$ factor associated to any free ergodic pmp action of a free product group is prime \cite{CH08}. Then, by developing methods from \cite{Si10,Va10b}, D. Hoff proved that $L^\infty(X)\rtimes\Gamma$ is prime whenever $\Gamma\curvearrowright (X,\mu)$ is a free ergodic pmp action of a group which has positive first Betti number \cite{Ho15}.
\subsection{Statement of the main results} The first primeness results for group von Neumann algebras arising from icc irreducible lattices in higher rank semisimple Lie groups were obtained only recently in our joint work with D. Hoff and A. Ioana \cite{DHI16} (see also \cite{CdSS17, dSP18}). Recall that a lattice $\Gamma$ in a product $G=G_1\times\dots\times G_n$ of locally compact second countable groups is called {\it irreducible} if the action of $G$ on the homogeneous space $G/\Gamma$ is {\it irreducible}, meaning $G_i\curvearrowright G/\Gamma$ is ergodic for any $1\leq i\leq n.$ More generally, a pmp action $G\curvearrowright (X,\mu)$ is called {\it irreducible} if $G_i\curvearrowright (X,\mu)$ is ergodic for any $1\leq i\leq n.$ Despite all these advancements, the primeness problem for II$_1$ factors arising from arbitrary free ergodic pmp actions of groups of ``higher rank type'' is largely open. Our results aim in this direction by finding a large class of product groups for which all their irreducible actions give rise to prime II$_1$ factors, see Corollary \ref{A2}. These examples follow from our main technical result. Before stating the result, we introduce the following class of groups and explain the terminology that will be used. {\bf Class $\mathcal C$.} We say that a countable group $\Gamma$ belongs to the class $\mathcal C$ if one of the following conditions is satisfied: \begin{enumerate} \item $\Gamma$ is an icc, weakly amenable, bi-exact group (see \cite{PV11} for terminology), or \item $\Gamma=\Sigma_1*\Sigma_2$ is a free product of arbitrary groups such that $|\Sigma_1|\ge 2$ and $|\Sigma_2|\ge 3$, or \item $\Gamma=\Sigma_0\wr \Gamma_0$ is the wreath product of a non-trivial amenable group $\Sigma_0$ and a non-amenable group $\Gamma_0.$ \end{enumerate} The symbol $\prec$ stands for Popa's intertwining-by-bimodules technique (see Section \ref{cornerr}). We denote by $M^t$ the {\it amplification} of the II$_1$ factor $M$ by $t>0$ and for a pmp action $\Gamma\curvearrowright (X,\mu)$ we denote by $L^\infty(X)^{\Sigma}$ the subalgebra of elements of $L^\infty(X)$ fixed by a subgroup $\Sigma$ of $\Gamma$ (see Section \ref{term}). \begin{main}\label{A} Let $\Gamma=\Gamma_1\times\dots \times\Gamma_n$ be a product of $n\ge 2$ groups that belong to the class $\mathcal C$. Let $\Gamma\curvearrowright (X,\mu)$ be a free ergodic pmp action and denote $M=L^\infty (X)\rtimes\Gamma.$\\ Suppose that $M= P_1\bar\otimes P_2 $, for some II$_1$ factors $P_1$ and $P_2.$ Then there exists a partition $T_1\sqcup T_2=\{1,\dots ,n \}$ such that $L^\infty(X)\prec_M L^\infty(X)^{\Gamma_{T_1}}\vee L^\infty(X)^{\Gamma_{T_2}}$, where $ \Gamma_{T_i}:=\times_{j\in T_i}\Gamma_j$, for any $i\in\{1,2\}$. Moreover, if in addition the groups $\Gamma_i$'s have Kazhdan's property (T), then there exist a decomposition $M=P_1^t\bar\otimes P_2^{1/t}$, for some $t> 0$, and a unitary $u\in M$ such that $$ P_1^t=u (L^\infty(X)^{\Gamma_{T_2}}\rtimes\Gamma_{T_1})u^* \text{ and } P_2^{1/t}=u(L^\infty(X)^{\Gamma_{T_1}}\rtimes\Gamma_{T_2})u^*. $$ In particular, there exists a pmp action $\Gamma_{T_i}\curvearrowright (X_i, \mu_i)$ for any $i\in\{1,2\}$ such that $\Gamma\curvearrowright X$ is isomorphic to the product action $ \Gamma_{T_1}\times \Gamma_{T_2}\curvearrowright X_1\times X_2 . $ \end{main} The moreover part applies if the groups $\Gamma_i$ are icc, weakly amenable, bi-exact, and have Kazhdan's property (T). The following classes of groups satisfy these conditions.
\begin{enumerate} \item uniform lattices in $Sp(k,1)$ with $k\ge 2$ or any icc group in their measure equivalence class, \item Gromov's random groups with density satisfying $3^{-1}<d<2^{-1}.$ \end{enumerate} Note that the moreover part of Theorem \ref{A} provides the first class of product groups for which the primeness problem for II$_1$ factors arising from their actions is completely settled. \begin{mcor}\label{A2} Let $\Gamma=\Gamma_1\times\dots \times\Gamma_n$ be a product of $n\ge 2$ groups\footnote{The case $n=1$ already follows from \cite{Oz04}, \cite{CH08} and \cite{CPS11}.} that belong to the class $\mathcal C$. Let $\Gamma\curvearrowright (X,\mu)$ be a free ergodic pmp action. If $\Gamma\curvearrowright (X,\mu)$ is irreducible, or if the groups $\Gamma_i$'s have Kazhdan's property (T) and the action $\Gamma\curvearrowright (X,\mu)$ does not admit a direct product decomposition, then $L^\infty(X)\rtimes\Gamma$ is prime. \end{mcor} Theorem \ref{A} allows us to prove a unique prime factorization theorem for any II$_1$ factor arising from an arbitrary free ergodic pmp action of a product of groups that belong to the class $\mathcal C$ and have Kazhdan's property (T). More precisely, we have: \begin{mcor}\label{mcor}\label{C} Let $\Gamma=\Gamma_1\times\dots \times\Gamma_n$ be a product of $n\ge 2$ groups that belong to the class $\mathcal C$ and have Kazhdan's property (T). Let $\Gamma\curvearrowright (X,\mu)$ be a free ergodic pmp action. Denote $M=L^\infty (X)\rtimes\Gamma.$ Then there exist an integer $1\leq k\leq n$ and a partition $S_1\sqcup \dots \sqcup S_k =\{1,...,n\}$, unique up to a permutation, together with pmp actions $\Gamma_{S_{i}}\curvearrowright (X_i,\mu_i)$, $1\leq i\leq k$, such that: \begin{enumerate} \item $\Gamma\curvearrowright X$ is isomorphic to the product action $\Gamma_{S_1}\times ...\times \Gamma_{S_k}\curvearrowright X_1\times ...\times X_k$. \item $M_i:=L^\infty(X_i)\rtimes\Gamma_{S_i}$ is prime for any $1\leq i\leq k$. \end{enumerate} Moreover, the following hold: \begin{enumerate} \item If $M=P_1\bar{\otimes}P_2$, for some II$_1$ factors $P_1, P_2$, then there exist a partition $I_1\sqcup I_2=\{1,...,k\}$ and a decomposition $M=P_1^t\bar{\otimes}P_2^{1/t}$, for some $t>0$, such that $P_1^t=\bar{\otimes}_{i\in I_1}M_i$ and $P_2^{1/t}=\bar{\otimes}_{i\in I_2}M_i$, up to unitary conjugacy in $M$. \item If $M=P_1\bar{\otimes}\dots\bar{\otimes}P_m$, for some $m\geq k$ and II$_1$ factors $P_1,...,P_m$, then $m=k$ and there exists a decomposition $M=P_1^{t_1}\bar{\otimes}...\bar{\otimes}P_k^{t_k}$, for some $t_1,...,t_k>0$ with $t_1t_2\dots t_k=1$, such that after a permutation of indices and unitary conjugacy we have $M_i=P_i^{t_i}$, for all $1\leq i\leq k$. \item In (2), the assumption $m\geq k$ can be omitted if each $P_i$ is assumed to be prime. \end{enumerate} \end{mcor} The first unique prime factorization results for II$_1$ factors were obtained by N. Ozawa and S. Popa in their seminal work \cite{OP03}. Subsequently, several other unique prime factorization results have been obtained in \cite{Pe06, CS11, SW11, Is14, CKP14, HI15, Ho15, Is16, DHI16, De19}. Corollary \ref{C} is the first unique prime factorization result that applies to II$_1$ factors arising from arbitrary free ergodic pmp actions of product groups. Note that all the known unique prime factorization results in the II$_1$ factor framework concern von Neumann algebras which do not have Murray and von Neumann's {\it property Gamma} \cite{MvN43}.
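Since it plays a role in what follows, we recall the definition: a II$_1$ factor $(M,\tau)$ has {\it property Gamma} if there exists a sequence of unitaries $(u_n)_n\subset\mathcal U(M)$ with $\tau(u_n)=0$ such that
$$\|u_nx-xu_n\|_2\to 0,\quad \text{for all } x\in M.$$
We also record the elementary observation that property Gamma passes to tensor products: if $N$ has property Gamma, then so does $N\bar\otimes P$ for any II$_1$ factor $P$, since the unitaries $u_n\otimes 1$ remain trace-zero and asymptotically central.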
Another novel aspect of this paper is the following unique prime factorization theorem in which the factors possibly have property Gamma. \begin{main}\label{UPFgeneral} For any $1\leq i\leq k$, let $\Gamma_i\curvearrowright (X_i,\mu_i)$ be a free ergodic pmp action of a group $\Gamma_i$ that belongs to the class $\mathcal C$. For any $1\leq i\leq k$, denote $M_i=L^\infty(X_i)\rtimes\Gamma_i$ and let $M=M_1\bar\otimes \dots \bar\otimes M_k$. Then the following hold: \begin{enumerate} \item If $M=P_1\bar{\otimes}P_2$, for some II$_1$ factors $P_1, P_2$, then there exists a partition $I_1\sqcup I_2=\{1,...,k\}$ such that $P_j$ is stably isomorphic to $\bar{\otimes}_{i\in I_j}M_i$, for any $j\in\{1,2\}.$ \item If $M=P_1\bar{\otimes}\dots\bar{\otimes}P_m$, for some $m\geq k$ and II$_1$ factors $P_1,...,P_m$, then $m=k$ and there exists a permutation $\sigma$ of $\{1,...,k\}$ such that $P_i$ is stably isomorphic to $M_{\sigma(i)}$, for any $1\leq i\leq k.$ \item In (2), the assumption $m\geq k$ can be omitted if each $P_i$ is assumed to be prime. \end{enumerate} Moreover, if $\Gamma_i\curvearrowright (X_i,\mu_i)$ is strongly ergodic, for any $1\leq i\leq k$, then the identifications of the von Neumann algebras in (1), (2) and (3) are implemented up to amplification by a unitary from $M$ (as in Corollary \ref{C}). \end{main} \begin{remark} Assume $\Gamma_i\curvearrowright (X_i,\mu_i)$ is not strongly ergodic, for any $1\leq i\leq k$. Then the conclusion of Theorem \ref{UPFgeneral} is optimal in the sense that it cannot be improved to deduce that the identifications in (1), (2) and (3) can be implemented up to amplification by a unitary from $M$. This follows from \cite[Theorem B]{Ho15}, since all the $M_i$'s have property Gamma. To illustrate this, if we assume that $k=2$, \cite[Theorem B]{Ho15} implies that $M$ admits an automorphism $\theta$ such that $\theta(M_i^t)$ is not unitarily conjugate to $M_j$, for any $i,j\in\{1,2\}$ and $t>0.$ \end{remark} {\bf Comments on the proof of Theorem \ref{A}.} We end the introduction with some brief, informal comments on the proof of Theorem \ref{A}. For simplicity, assume $\Gamma=\Gamma_1\times...\times \Gamma_n$ is a product of $n\ge 2$ icc, weakly amenable, bi-exact groups. Let $\Gamma\curvearrowright (X,\mu)$ be a free ergodic pmp action and denote $M=L^\infty(X)\rtimes\Gamma.$ Assume that we have the tensor product decomposition $M=P_1\bar\otimes P_2$ into II$_1$ factors. We aim to show that $\Gamma\curvearrowright X$ admits a non-trivial direct product decomposition. In order to attain this goal we will heavily use S. Popa's deformation/rigidity theory. In the first part of the proof we use S. Popa and S. Vaes' breakthrough work \cite{PV11,PV12} to obtain a partition $T_1\sqcup T_2=\{1,\dots,n\}$ such that \begin{equation}\label{z1} P_1\prec L^\infty(X)\rtimes\Gamma_{T_1}, \text{ and } P_2\prec L^\infty(X)\rtimes\Gamma_{T_2}, \end{equation} where $\Gamma_{T_i}=\times_{j\in T_i}\Gamma_j$, for any $i\in \{1,2\}$. Here, $P\prec Q$ denotes the fact that a corner of $P$ embeds into a corner of $Q$ inside the ambient algebra in the sense of Popa \cite{Po03}. For ease of notation, we will write $P\sim Q$ if $Pp'\prec Q$ and $Qq'\prec P$, for any $p'\in P'\cap M$ and $q'\in Q'\cap M$.
Since the equality $P_i\vee (P_i'\cap M)= M$ can be seen as a finite index inclusion of von Neumann algebras in the sense of Pimsner-Popa \cite{PP86} for any $i\in\{1,2\}$ (see Section \ref{S: PP}), we can make use of \eqref{z1} and deduce the existence of some abelian von Neumann subalgebras $D_1\subset P_1$ and $D_2\subset P_2$ such that \begin{equation}\label{z2} L^\infty(X)\rtimes\Gamma_{T_1}\prec P_1\bar\otimes D_2, \text{ and } L^\infty(X)\rtimes\Gamma_{T_2}\prec D_1\bar\otimes P_2, \end{equation} \begin{equation}\label{z3} D_2\sim {L^\infty(X)}^{\Gamma_{T_1}}, \text{ and } D_1\sim {L^\infty(X)}^{\Gamma_{T_2}}. \end{equation} Here, we denote by ${L^\infty(X)}^{\Gamma_{T_1}}$ and ${L^\infty(X)}^{\Gamma_{T_2}}$ the subalgebras of elements in ${L^\infty(X)}$ fixed by $\Gamma_{T_1}$ and $\Gamma_{T_2},$ respectively. By combining the intertwining relations \eqref{z2} and \eqref{z3} we show that ${L^\infty(X)}\sim {L^\infty(X)}^{\Gamma_{T_1}}\vee {L^\infty(X)}^{\Gamma_{T_2}}$. Finally, if we assume in addition that the groups $\Gamma_i$'s have property (T), we deduce that we have the identifications $$ P_1 = {L^\infty(X)}^{\Gamma_{T_2}}\rtimes{\Gamma_{T_1}}, \text{ and } P_2 = {L^\infty(X)}^{\Gamma_{T_1}}\rtimes{\Gamma_{T_2}}, $$ up to unitary conjugacy and amplification. {\bf Acknowledgment.} I warmly thank Ionut Chifan and Adrian Ioana for many comments and suggestions that helped improve the exposition of the paper. I am especially grateful to Adrian Ioana for valuable comments on a previous draft which helped increase the generality of the results. I also thank Mart\'in Argerami and Remus Floricel for a useful discussion about these results. Finally, I would like to thank the referee for valuable comments. The author was partially supported by a PIMS fellowship. \section{Preliminaries} \subsection{Terminology}\label{term} In this paper we consider {\it tracial von Neumann algebras} $(M,\tau)$, i.e. von Neumann algebras $M$ equipped with a faithful normal tracial state $\tau: M\to\mathbb C.$ This induces a norm on $M$ by the formula $\|x\|_2=\tau(x^*x)^{1/2},$ for all $x\in M$. We will always assume that $M$ is a {\it separable} von Neumann algebra, i.e. the $\|\cdot\|_2$-completion of $M$, denoted by $L^2(M)$, is separable as a Hilbert space. We denote by $\mathcal U(M)$ the {\it unitary group} of $M$ and by $\mathcal Z(M)$ its {\it center}. All inclusions $P\subset M$ of von Neumann algebras are assumed unital. We denote by $e_P: L^2(M)\to L^2(P)$ the orthogonal projection onto $L^2(P)$, by $E_{P}:M\to P$ the unique $\tau$-preserving {\it conditional expectation} from $M$ onto $P$, by $P'\cap M=\{x\in M|xy=yx, \text{ for all } y\in P\}$ the {\it relative commutant} of $P$ in $M$ and by $\mathcal N_{M}(P)=\{u\in\mathcal U(M)|uPu^*=P\}$ the {\it normalizer} of $P$ in $M$. We say that $P$ is {\it regular} in $M$ if the von Neumann algebra generated by $\mathcal N_M(P)$ equals $M$. For two von Neumann subalgebras $P,Q\subset M$, we denote by $P\vee Q$ the von Neumann algebra generated by $P$ and $Q$. {\it Jones' basic construction} of the inclusion $P\subset M$ is defined as the von Neumann subalgebra of $\mathbb B(L^2(M))$ generated by $M$ and $e_P$, and is denoted by $\langle M,e_P \rangle$. The {\it amplification} of a II$_1$ factor $(M,\tau)$ by a positive number $t$ is defined to be $M^t=p(\mathbb B(\ell^2(\mathbb Z))\bar\otimes M)p$, for a projection $p\in \mathbb B(\ell^2(\mathbb Z))\bar\otimes M$ satisfying $(\text{Tr}\otimes\tau)(p)=t$.
Here Tr denotes the usual trace on $\mathbb B(\ell^2(\mathbb Z))$. Since $M$ is a II$_1$ factor, $M^t$ is well defined. Note that if $M=P_1\bar\otimes P_2$, for some II$_1$ factors $P_1$ and $P_2$, then there exists a natural identification $M=P_1^t\bar\otimes P_2^{1/t}$, for every $t>0.$ Let $\Gamma\overset{\sigma}{\curvearrowright} A$ be a trace preserving action of a countable group $\Gamma$ on a tracial von Neumann algebra $(A,\tau)$. For a subgroup $\Sigma<\Gamma$ we denote by $A^\Sigma=\{a\in A|\sigma_g(a)=a, \text{ for all } g\in\Sigma\}$ the subalgebra of elements of $A$ fixed by $\Sigma.$ Finally, for a product group $\Gamma=\Gamma_1 \times\dots\times \Gamma_n$ and a subset $T\subset \{1,\dots,n\}$, we denote $\Gamma_T=\times_{i\in T}\Gamma_i.$ \subsection{Intertwining-by-bimodules}\label{cornerr} We next recall from \cite[Theorem 2.1 and Corollary 2.3]{Po03} the {\it intertwining-by-bimodules} technique of S. Popa, which gives a powerful criterion for the existence of intertwiners between arbitrary subalgebras of a tracial von Neumann algebra. \begin{theorem}[\!\!\cite{Po03}]\label{corner} Let $(M,\tau)$ be a tracial von Neumann algebra and let $P\subset pMp, Q\subset qMq$ be von Neumann subalgebras. Let $\mathcal G\subset\mathcal U(P)$ be a subgroup such that $\mathcal G''=P$. Then the following are equivalent: \begin{itemize} \item There exist projections $p_0\in P, q_0\in Q$, a $*$-homomorphism $\theta:p_0Pp_0\rightarrow q_0Qq_0$ and a non-zero partial isometry $v\in q_0Mp_0$ such that $\theta(x)v=vx$, for all $x\in p_0Pp_0$. \item There is no sequence $(u_n)_n\subset\mathcal G$ satisfying $\|E_Q(xu_ny)\|_2\rightarrow 0$, for all $x,y\in M$. \end{itemize} \end{theorem} If one of these equivalent conditions holds true, we write $P\prec_{M}Q$, and say that {\it a corner of $P$ embeds into $Q$ inside $M$.} If $Pp'\prec_{M}Q$ for any non-zero projection $p'\in P'\cap pMp$, then we write $P\prec^{s}_{M}Q$.\\ Whenever the ambient algebra $(M,\tau)$ is clear from the context, we will write $P\prec Q$ instead of $P\prec_{M}Q$. The following lemma is a consequence of \cite[Lemma 4.11]{OP07}. For completeness, we provide a short proof. \begin{lemma}[\!\!\cite{OP07}]\label{L:free} Let $\Gamma{\curvearrowright} (Y,\nu)$ be an ergodic pmp action of an icc group and denote $M=L^\infty(Y)\rtimes\Gamma$. If $L^\infty(Y)'\cap M\prec_M L^\infty(Y)$, then $\Gamma\curvearrowright (Y,\nu)$ is free. \end{lemma} {\it Proof.} Let $B=L^\infty(Y)$. The assumption implies that there exist non-zero projections $p\in B'\cap M, q\in B$, a non-zero partial isometry $v\in qMp$ and an injective $*$-homomorphism $\theta: p(B'\cap M)p\to Bq$ such that $\theta(x)v=vx$, for all $x\in p(B'\cap M)p$. Note that a standard argument which goes back to \cite{MvN43} shows that $M$ is a factor, since $\Gamma\curvearrowright Y$ is ergodic and $\Gamma$ is icc. Since $p(B'\cap M)p$ and $Bp$ are abelian, it follows that $p(B'\cap M)p$ is maximal abelian in $pMp$. Using that $M$ is a II$_1$ factor, we obtain that there exists a maximal abelian subalgebra $C\subset M$ such that $Cp=p(B'\cap M)p$ and $p\in C.$ Hence, $C\prec_M B$. We can now apply \cite[Lemma 4.11]{OP07} and obtain the conclusion. \hfill$\blacksquare$ We continue by observing some elementary facts. The first result is well known and we include a short proof for the reader's convenience. \begin{lemma}\label{small} Let $N$ be a von Neumann subalgebra of a tracial von Neumann algebra $(M,\tau)$.
Let $P\subset pNp$ and $Q\subset qNq$ be von Neumann subalgebras such that $P\prec_M Q$ and assume that $Q\subset qMq$ is regular. Then $P\prec_N Q.$ \end{lemma} {\it Proof.} Assume the contrary, that $P\nprec_N Q$. Thus, there exists a sequence of unitaries $(u_n)_n\subset \mathcal U(P)$ such that $\|E_{Q}(xu_ny)\|_2\to 0$, for any $x,y\in N.$ Hence, $\|E_{Q}(u_ny)\|_2=\|E_{Q}(u_nE_{N}(y))\|_2\to 0$, for any $y\in M.$ Since $Q$ is regular in $qMq$, we obtain that $\|E_{Q}(xu_ny)\|_2\to 0$, for any $x,y\in M$, a contradiction. \hfill$\blacksquare$ \begin{lemma}\label{L: joint} Let $(M,\tau)$ be a tracial von Neumann algebra and let $Q\subset M$ be a regular von Neumann subalgebra. Let $R_1,R_2\subset M$ be commuting von Neumann subalgebras such that $R_i\prec_M^s Q$ for any $i\in\{1,2\}$. Suppose $R_2$ is abelian. Then $R_1\vee R_2\prec^s_M Q.$ \end{lemma} {\it Proof.} Take a non-zero projection $s\in (R_1\vee R_2)'\cap M$. Since $R_1s\prec_M Q,$ there exist projections $r_1\in R_1$ and $q\in Q,$ a $*$-homomorphism $\theta:r_1R_1r_1s\to qQq$ and a non-zero partial isometry $v\in qMr_1s$, satisfying \begin{equation}\label{111} \theta(x)v=vx, \text{ for all } x\in r_1R_1r_1s. \end{equation} We argue that $(r_1R_1r_1s)\vee (R_2r_1s)\prec_M Q.$ Supposing the contrary, there exist two sequences of unitaries $(u_n)_n\subset \mathcal U(r_1R_1r_1s)$ and $(v_n)_n\subset \mathcal U(R_2)$ such that $$ \|E_{Q}(xu_n (v_nr_1s)y)\|_2\to 0, \text{ for all } x,y\in M. $$ Note that $u_nr_1s=u_n$, for all $n$. By taking $x=v$ and using the intertwining relation \eqref{111}, we get that $$ \|E_{Q}(\theta(u_n)v v_n y)\|_2=\|E_{Q}(v v_n y)\|_2\to 0, \text{ for all } y\in M. $$ Let $r:=v^*v.$ Since $Q$ is regular, we get that \begin{equation}\label{zz} \|E_{Q}(xr v_n y)\|_2\to 0, \text{ for all } x,y\in M. \end{equation} Denote $r':=\vee_{w\in\mathcal U(R_2)}wrw^*\in R_2'\cap M$. Since $R_2$ is abelian, \eqref{zz} implies that $\|E_{Q}(x(wrw^*) v_n y)\|_2\to 0,$ for any $w\in\mathcal U(R_2)$ and $x,y\in M.$ Note that $p_1\vee p_2=s(p_1+p_2)$, for any two projections $p_1$ and $p_2$ in $M$. Here we denote by $s(b)$ the support projection of a positive element $b\in M.$ Moreover, by using Borel functional calculus, there exists a sequence $(c_n)_n\subset M$ such that $c_n(p_1+p_2)$ converges to $s(p_1+p_2)$ in the $\|\cdot\|_2$-norm. Therefore, it follows that $\|E_{Q}(x (w_1rw_1^*+w_2rw_2^*) v_n y)\|_2\to 0$, and hence $\|E_{Q}(x (w_1rw_1^*\vee w_2rw_2^*) v_n y)\|_2\to 0$, for any $w_1,w_2\in \mathcal U(R_2)$ and $x,y\in M$. Finally, by induction it follows that $$ \|E_{Q}(x(w_1rw_1^*\vee...\vee w_mrw_m^*) v_n y)\|_2\to 0, $$ for any $w_1,...,w_m\in\mathcal U(R_2)$ and $x,y\in M$. \\ Hence, $\|E_{Q}(xr'v_ny)\|_2\to 0$, for all $x,y\in M$. This shows that $R_2r'\nprec_M Q,$ a contradiction. Therefore, $(R_1\vee R_2)s\prec_M Q$, which implies that $R_1\vee R_2\prec_M^s Q.$ \hfill$\blacksquare$ We will need the following result which is an extension of \cite[Lemma 2.8(2)]{DHI16} (see also \cite[Proposition 2.7]{PV11}). \begin{proposition}\label{L: PV} Let $(M,\tau)$ be a tracial von Neumann algebra and let $Q_1,Q_2\subset M$ be von Neumann subalgebras which form a commuting square, i.e. $E_{Q_1}\circ E_{Q_2}=E_{Q_2}\circ E_{Q_1}$. Assume that there exist commuting subgroups $\mathcal N_1< \mathcal N_M(Q_1)$ and $\mathcal N_2< \mathcal N_M(Q_2)$ satisfying $(\mathcal N_1\vee\mathcal N_2)''=M$. Let $P\subset pMp$ be a von Neumann subalgebra.
If $P\prec_M^s Q_1$ and $P\prec_M^s Q_2$, then $P\prec_M^s Q_1\cap Q_2.$ \end{proposition} \begin{remark} The proposition will be applied for the following particular case. Assume $M=P_1\bar\otimes P_2$ for some von Neumann subalgebras $P_1,P_2\subset M$ and let $D_i\subset P_i$ be a subalgebra for any $i\in\{1,2\}.$ By taking $Q_1=D_1\bar\otimes P_2$ and $Q_2=P_1\bar\otimes D_2$ the assumptions of Proposition \ref{L: PV} are satisfied. \end{remark} The proof of Proposition \ref{L: PV} follows directly by using the next lemma and adapting the proof of \cite[Lemma 2.8(2)]{DHI16} (see also \cite[Proposition 2.7]{PV11}). We leave the details to the reader. \begin{lemma} Let $(M,\tau)$ be a tracial von Neumann algebra and let $Q_1,Q_2\subset M$ and $\mathcal N_1,\mathcal N_2$ be as in Proposition \ref{L: PV}. Denote $Q=Q_1\cap Q_2.$ Then the $M$-$M$-bimodule $L^2(\langle M,e_{Q_1} \rangle)\otimes_M L^2(\langle M,e_{Q_2} \rangle)$ is contained in a multiple of the $M$-$M$-bimodule $L^2(\langle M,e_{Q}\rangle)$. \end{lemma} {\it Proof.} The proof follows almost verbatim part of the proof of \cite[Proposition 2.7]{PV11}. However, we provide some details for the reader's convenience. Denote by $H$ the $M$-$M$-bimodule $L^2(\langle M,e_{Q_1} \rangle)\otimes_M L^2(\langle M,e_{Q_2} \rangle)$. For $u_1,v_1\in \mathcal N_1$ and $u_2,v_2\in\mathcal N_2$, denote by $H_{u_1,v_1}^{u_2,v_2}$ the closed linear span of $\{xe_{Q_1}u_1u_2\otimes_M v_1v_2e_{Q_2}y|x,y\in M\}.$ Note that the formulas $u E_{Q_i}(\cdot)u^*=E_{Q_i}(u\cdot u^*)$, for any $u\in\mathcal N_M(Q_i)$, combined with the commuting square property imply that the map $$ x{e_{Q_1}}u_1u_2\otimes_M v_1v_2e_{Q_2}y\to xu_1v_1\otimes_Q u_2v_2y $$ defines an $M$-$M$-bimodular unitary operator from $H_{u_1,v_1}^{u_2,v_2}$ into $L^2(\langle M,e_{Q}\rangle)$. To show this it suffices to verify that \begin{equation}\label{formula} \langle {xe_{Q_1}}u_1u_2\otimes_M v_1v_2e_{Q_2}y, {e_{Q_1}}u_1u_2\otimes_M v_1v_2e_{Q_2} \rangle=\langle x u_1v_1\otimes_Q u_2v_2y,u_1v_1\otimes_Q u_2v_2 \rangle, \end{equation} for all $x,y\in M$. Note that the left hand side of \eqref{formula} equals $$ \begin{array}{rcl} \langle {xe_{Q_1}}u_1u_2\otimes_M v_1v_2e_{Q_2}y, {e_{Q_1}}u_1u_2\otimes_M v_1v_2e_{Q_2} \rangle &=& \tau(E_{Q_2}(v_2^*v_1^* E_M(u_2^*u_1^*E_{Q_1}(x)u_1u_2) v_1v_2)y)\\ &=& \tau(v_2^* u_2^*E_{Q_2}(v_1^* u_1^*E_{Q_1}(x)u_1 v_1)u_2 v_2y) \end{array} $$ Therefore, the right hand side of \eqref{formula} equals its left hand side, since $$ \langle x u_1v_1\otimes_Q u_2v_2y,u_1v_1\otimes_Q u_2v_2 \rangle = \tau(v_2^*u_2^*E_{Q_2}(E_{Q_1}(v_1^*u_1^*xu_1v_1))u_2v_2y). $$ Remark that the assumption $(\mathcal N_1\vee\mathcal N_2)''=M$ implies that the closed linear span of $\{u_1u_2|u_1\in\mathcal N_1, u_2\in\mathcal N_2\}$ equals $L^2(M)$. Therefore, the closed linear span of $\{H_{u_1,v_1}^{u_2,v_2}|$ $u_1,v_1\in \mathcal N_1,u_2,v_2\in \mathcal N_2\}$ equals $H$. This shows that $H$ is contained in a multiple of $L^2(\langle M,e_Q\rangle)$. \hfill$\blacksquare$ \subsection{Finite index inclusions of von Neumann algebras}\label{S: PP} For an inclusion $P\subset M$ of II$_1$ factors the {\it Jones index} is the dimension of $L^2(M)$ as a left $P$-module \cite{Jo81}. In \cite{PP86}, M. Pimsner and S. Popa defined a probabilistic notion of index for an inclusion $P\subset M$ of arbitrary von Neumann algebras with conditional expectation, which in the case of inclusions of II$_1$ factors coincides with Jones' index.
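As a basic illustration of the Jones index (included only for orientation and not needed in the sequel): if $\Lambda<\Gamma$ is a finite index inclusion of countable icc groups, then the corresponding inclusion of II$_1$ factors satisfies
$$[L(\Gamma):L(\Lambda)]=[\Gamma:\Lambda],$$
since, as a left $L(\Lambda)$-module, $\ell^2(\Gamma)$ decomposes as a direct sum of $[\Gamma:\Lambda]$ copies of $\ell^2(\Lambda)$, one for each right coset of $\Lambda$ in $\Gamma$.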
Following \cite{PP86}, we say that the inclusion $P\subset M$ of tracial von Neumann algebras has {\it probabilistic index} $[M:P]=\lambda^{-1}$, where $$ \lambda= \text{inf}\{\|E_P(x)\|^2_2{\|x\|_2^{-2}}|x\in M_+, x\neq 0\}. $$ Here we use the convention that $\frac{1}{0}=\infty.$ \begin{lemma}[\!\!{\cite[Lemma 2.3]{PP86}}]\label{PP} Let $P\subset M$ be an inclusion of tracial von Neumann algebras such that $[M:P]<\infty$. Then the following hold: \begin{enumerate} \item If $p\in P$ is a projection, then $[pMp: pPp]<\infty.$ \item $M\prec^s_M P.$ \end{enumerate} \end{lemma} For a proof, see \cite[Lemma 2.4]{CIK13}. We will need the following well known lemma and we include its proof for completeness (see also \cite[Lemma 3.9]{Va08}). \begin{lemma}\label{L: center}\label{L: fi} Let $(M,\tau)$ be a tracial von Neumann algebra and let $R\subset N\subset pMp$ be von Neumann subalgebras such that $[N:R]<\infty$. Then the following hold: \begin{enumerate} \item If $R'\cap N\subset R$, then there exists a non-zero projection $z\in\mathcal Z(R)$ such that $\mathcal Z(N)z=\mathcal Z(R)z$. \item Assume $R'\cap N\subset R$ or $\mathcal Z(R)$ is completely atomic. If $Q\subset qMq$ is a von Neumann subalgebra such that $R\prec_{M} Q$, then $N\prec_{M} Q.$ \end{enumerate} \end{lemma} {\it Proof.} (1) Applying Lemma \ref{PP}(2), we get that $N\prec_N R$. By passing to relative commutants, we can use \cite[Lemma 3.5]{Va08} and deduce that $\mathcal Z(R)\prec_N \mathcal Z(N).$ Hence, we obtain that there exist projections $r\in \mathcal Z(R), n\in\mathcal Z(N)$, a non-zero partial isometry $v\in nNr$ and a $*$-homomorphism $\theta : \mathcal Z (R)r\to \mathcal Z(N)n$ such that $v\theta(x)=\theta(x)v=vx$, for all $x\in \mathcal Z (R)r.$ By noticing that $\mathcal Z(N)\subset \mathcal Z(R)$, we obtain that $E_{\mathcal Z(R)}(v^*v)\theta(x)=E_{\mathcal Z(R)}(v^*v) x$, for all $x\in \mathcal Z(R)r.$ If we denote by $p_0$ the support projection of $E_{\mathcal Z(R)}(v^*v)$, we get that $p_0\theta(x)=p_0x$, for all $x\in \mathcal Z(R)r.$ Therefore, $\mathcal Z(R)\prec_{\mathcal Z(R)} \mathcal Z(N)$, which clearly implies the conclusion. (2) Assume first that $R'\cap N\subset R$. Since $R\prec_M Q$, there exist projections $r\in R$, $q_0\in Q$, a non-zero partial isometry $v\in q_0Mr$ and a $*$-homomorphism $\theta: rRr\to q_0Qq_0$ such that $$ \theta(x)v=vx, \text{ for all } x\in rRr. $$ Note that $v^*v\in (R'\cap pMp)r$ and denote by $r_1$ the support projection of $E_{N}(v^*v)$. Notice that $r_1\in (R'\cap N)r.$ Since $R'\cap N\subset R$, then $r_1\in rRr$ and therefore, Lemma \ref{PP} implies that $r_1Nr_1\prec_{r_1Nr_1} r_1Rr_1.$ Thus, there exist projections $n\in r_1Nr_1$, $r_0\in r_1Rr_1$, a non-zero partial isometry $w\in r_0Nn$ and a $*$-homomorphism $\psi: nNn\to r_0Rr_0$ such that $$ \psi(x)w=wx, \text{ for all } x\in nNn. $$ Moreover, by restricting $ww^*$ if necessary we can assume without loss of generality that the support projection of $E_{r_1Rr_1}(ww^*)$ equals $r_0.$ Note that $\theta(\psi(\cdot)): nNn\to q_0Qq_0$ is a $*$-homomorphism which satisfies \begin{equation}\label{c} \theta(\psi(x))vw=vwx, \text{ for all } x\in nNn. \end{equation} If $vw=0$, then $E_N(v^*v)ww^*=0$, which implies $r_1ww^*=0.$ Hence, $r_1E_{r_1Rr_1}(ww^*)=0$, showing that $r_0=0$, contradiction. This proves that $vw\neq 0.$ By replacing $vw$ by the partial isometry from its polar decomposition, the intertwining relation \eqref{c} still holds. 
This shows that $N\prec_M Q.$ Assume now that $\mathcal Z(R)$ is completely atomic. Note that \cite[Lemma 2.11 and Lemma 2.4(2)]{Dr19} give that $N\prec_M Q.$ \hfill$\blacksquare$ \subsection{Relative amenability} A tracial von Neumann algebra $(M,\tau)$ is {\it amenable} if there exists a positive linear functional $\Phi:\mathbb B(L^2(M))\to\mathbb C$ such that $\Phi_{|M}=\tau$ and $\Phi$ is $M$-{\it central}, meaning $\Phi(xT)=\Phi(Tx),$ for all $x\in M$ and $T\in \mathbb B(L^2(M))$. The famous theorem of A. Connes asserts that a von Neumann algebra $M$ is amenable if and only if it is approximately finite dimensional \cite{Co76}. N. Ozawa and S. Popa have considered a very useful relative version of this notion \cite{OP07}. Let $(M,\tau)$ be a tracial von Neumann algebra. Let $p\in M$ be a projection and $P\subset pMp,Q\subset M$ be von Neumann subalgebras. Following \cite[Definition 2.2]{OP07}, we say that $P\subset pMp$ is {\it amenable relative to $Q$ inside $M$} if there exists a positive linear functional $\Phi:p\langle M,e_Q\rangle p\to\mathbb C$ such that $\Phi_{|pMp}=\tau$ and $\Phi$ is $P$-central. Note that $P$ is amenable relative to $\mathbb C$ inside $M$ if and only if $P$ is amenable. The following lemma is well known and it goes back to \cite[Lemma 10.2]{IPV10}, but we include a proof for completeness. The arguments are essentially contained in the proof of \cite[Proposition 3.2]{PV12}. \begin{lemma}\label{ipv} Let $\Gamma\curvearrowright (B,\tau)$ be a trace preserving action and denote $M=B\rtimes\Gamma.$ Define the $*$-homomorphism $\Delta:M\to M\bar\otimes L(\Gamma)$ by letting $\Delta(bu_g)=bu_g\otimes u_g,$ for all $b\in B$ and $g\in\Gamma.$\\ Let $P\subset pMp$ be a von Neumann subalgebra such that there exists a non-zero projection $p_1\in \Delta(P)'\cap \Delta(p)(M\bar\otimes M)\Delta(p)$ with the property that $\Delta(P)p_1$ is amenable relative to $M\otimes 1$. Then there exists a non-zero projection $p_0\in P'\cap pMp$ such that $Pp_0$ is amenable relative to $B$ inside $M$. \end{lemma} {\it Proof.} Define $\mathcal M=M\bar\otimes M.$ The assumption implies the existence of a positive linear functional $\Phi:p_1\langle \mathcal M, e_{M\otimes 1} \rangle p_1\to \mathbb C$ such that the restriction of $\Phi$ to $p_1\mathcal Mp_1$ equals the trace on $p_1\mathcal Mp_1$ and $\Phi$ is $\Delta(P)p_1$-central. Since $E_{M\bar\otimes 1}\circ\Delta=\Delta\circ E_B$, note that we can define the injective $*$-homomorphism $\Delta_1: \langle M,e_B \rangle\to \langle \mathcal M, e_{M\otimes 1} \rangle$ by letting $\Delta_{1}(e_B)=e_{M\otimes 1}$ and $\Delta_{1}(x)=\Delta(x)$, for all $x\in M$. Define the positive linear functional $\Psi: p\langle M,e_B\rangle p\to\mathbb C$ by $\Psi(x)=\Phi(p_1\Delta_1(x)p_1)$, for all $x\in p\langle M,e_B\rangle p.$ Note that $\Psi$ is $P$-central and its restriction to $pMp$ is normal. Therefore, \cite[Lemma 2.9]{BV12} implies that there exists a non-zero projection $p_0\in P'\cap pMp$ such that $Pp_0$ is amenable relative to $B$. \hfill$\blacksquare$ \subsection{Relatively strongly solid groups} Following \cite[Definition 2.7]{CIK13}, a countable group $\Gamma$ is said to be {\it relatively strongly solid}, and we write $\Gamma\in \mathcal C_{rss}$, if for any trace preserving action $\Gamma\curvearrowright B$ the following holds: if $M=B\rtimes\Gamma$ and $A\subset pMp$ is a von Neumann algebra which is amenable relative to $B$, then either $A\prec_M B$ or the normalizer $\mathcal N_{pMp}(A)''$ is amenable relative to $B$.
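Note that, taking the trivial action on $B=\mathbb C$ in the above definition, any $\Gamma\in\mathcal C_{rss}$ gives rise to a strongly solid group von Neumann algebra in the sense of Ozawa and Popa, i.e.
$$A\subset L(\Gamma) \text{ diffuse and amenable}\ \Longrightarrow\ \mathcal N_{L(\Gamma)}(A)'' \text{ is amenable},$$
since a diffuse von Neumann algebra never embeds into $\mathbb C$.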
In their breakthrough work \cite{PV11,PV12}, S. Popa and S. Vaes proved that all non-elementary hyperbolic groups belong to $\mathcal C_{rss}.$ More generally, \cite[Theorem 1.4]{PV12} shows that all weakly amenable, bi-exact groups are relatively strongly solid. A remarkable subsequent development was made by A. Ioana \cite{Io12} (see also \cite{Va13}) in the context of amalgamated free products by classifying all subalgebras of $M=M_1*_B M_2$ that are amenable relative to $B$ and that satisfy a certain spectral gap condition. We will make use of the following consequence for groups that belong to $\mathcal C_{\text{rss}}$ (see \cite[Lemma 5.2]{KV15}). \begin{lemma}[\!\!{\cite{KV15}}]\label{L: rss} Let $\Gamma\curvearrowright B$ be a trace preserving action of a group $\Gamma\in \mathcal C_{\text{rss}}$. Denote $M=B\rtimes\Gamma$. Let $P_1,P_2\subset pMp$ be commuting von Neumann subalgebras. Then either $P_1\prec_{M}B$ or $P_2$ is amenable relative to $B$ inside $M$. \end{lemma} For free product groups we will use the following consequence \cite[Theorem 3.1]{CdSS17} of \cite[Theorem A]{Va13}: \begin{lemma}[\!\!{\cite{CdSS17}}]\label{L: amalgam} Let $\Gamma\curvearrowright B$ be a trace preserving action, where $\Gamma=\Sigma_1*\Sigma_2$ with $|\Sigma_1|\ge 2$ and $|\Sigma_2|\ge 3$. Denote $M=B\rtimes\Gamma$ and assume $P_1,P_2\subset pMp$ are two commuting diffuse subalgebras such that $P_1\vee P_2\subset pMp$ has finite index. Then there exists an $i\in\{1,2\}$ such that $P_i\prec_M B$. \end{lemma} \subsection{Wreath product groups} The next lemma gives a dichotomy result for commuting subalgebras of von Neumann algebras arising from trace preserving actions of wreath product groups. The arguments rely heavily on \cite{IPV10}. \begin{lemma}\label{wreath} Let $\Gamma=\Sigma_0\wr\Gamma_0$ be the wreath product between a non-trivial amenable group $\Sigma_0$ and an infinite group $\Gamma_0$. Let $\Gamma\curvearrowright B$ be a trace preserving action and define $M=B\rtimes\Gamma.$ Let $P_1,P_2\subset pMp$ be two commuting subalgebras such that $P_1\vee P_2\subset pMp$ has finite index. Then there exists a non-zero projection $p_0\in P_1'\cap pMp$ such that $P_1p_0$ is amenable relative to $B$ or $P_2\prec_M B.$ \end{lemma} {\it Proof.} Let $\Delta: M\to M\bar\otimes L(\Gamma)$ be the $*$-homomorphism defined by $\Delta(bu_g)=bu_g\otimes u_g$, for all $b\in B$ and $g\in\Gamma.$ By applying \cite[Corollary 4.3]{IPV10}, one of the following possibilities occurs: (1) there exists a non-zero projection $p_1\in \Delta(P_1)'\cap \Delta(p)(M\bar\otimes M)\Delta(p)$ such that $\Delta(P_1)p_1$ is amenable relative to $M\otimes 1$, or (2) $\Delta(P_2)\prec M\otimes 1$, or (3) $\Delta(P_1\vee (P_1'\cap pMp))\prec M\bar\otimes L(\Sigma_0^{(\Gamma_0)})$, or (4) $\Delta(P_1\vee (P_1'\cap pMp))\prec M\bar\otimes L(\Gamma_0).$ If (1) holds, by Lemma \ref{ipv} there exists a non-zero projection $p_0\in P_1'\cap pMp$ such that $P_1p_0$ is amenable relative to $B$. If (2) holds, \cite[Lemma 2.9(1)]{Io10} implies that $P_2\prec_M B.$ We end the proof by showing that (3) and (4) cannot hold. Indeed, if (3) or (4) were true, then Lemma \ref{L: fi} combined with \cite[Lemma 9.2(4)]{Io10} would imply that $L(\Gamma)\prec_M L(\Sigma_0^{(\Gamma_0)})$ or $L(\Gamma)\prec_M L(\Gamma_0).$ Both are in contradiction with the fact that the subgroups $\Sigma_0^{(\Gamma_0)}$ and $\Gamma_0$ have infinite index in $\Gamma$. \hfill$\blacksquare$ Combining the previous three lemmas with \cite[Lemma 2.6(2)]{DHI16}, we obtain the following corollary.
\begin{corollary}\label{all} Let $\Gamma$ be a group that belongs to the class $\mathcal C$. Let $\Gamma\curvearrowright B$ be a trace preserving action and define $M=B\rtimes\Gamma.$ Let $P_1,P_2\subset pMp$ be two commuting diffuse subalgebras such that $P_1\vee P_2\subset pMp$ has finite index. Then there exists a non-zero projection $p_0\in \mathcal N_{pMp}(P_1)'\cap pMp$ such that $P_1p_0$ is amenable relative to $B$ or $P_2\prec_M B.$ \end{corollary} \section{From tensor decompositions of II$_1$ factors to decompositions of actions} The goal of this section is to prove Theorem \ref{AA}, which is the main ingredient in the proof of Theorem \ref{A}. The moreover part will provide a von Neumann algebraic criterion for pmp actions of product groups to admit a direct product decomposition. First, we need the following result. \begin{theorem}\label{Th:split} Let $\Gamma=\Gamma_1\times\Gamma_2$ be a product of countable icc groups and let $\Gamma\curvearrowright (X,\mu)$ be a free ergodic pmp action. Denote $M=L^\infty(X)\rtimes\Gamma.$ Suppose that $M= P_1\bar\otimes P_2 $ for some II$_1$ factors $P_1$ and $P_2$ such that $P_i\prec_M^s L^\infty(X)\rtimes\Gamma_i$, for all $i\in\{1,2\}.$ The following hold: \begin{enumerate} \item If $L^\infty(X)\prec_{M}L^\infty(X)^{\Gamma_1}\vee L^\infty(X)^{\Gamma_2}$ and $\Gamma_1$ has property (T), then there exist a decomposition $M=P_1^t\bar\otimes P_2^{1/t}$, for some $t> 0$, and a unitary $u\in M$ such that $ P_1^t=u (L^\infty(X)^{\Gamma_{2}}\rtimes\Gamma_{1})u^* \text{ and } P_2^{1/t}=u(L^\infty(X)^{\Gamma_{1}}\rtimes\Gamma_{2})u^*. $ \item If $L^\infty(X)=L^\infty(X)^{\Gamma_1}\vee L^\infty(X)^{\Gamma_2}$, then $P_1$ is stably isomorphic to $L^\infty(X)^{\Gamma_2}\rtimes\Gamma_1$ and $P_2$ is stably isomorphic to $L^\infty(X)^{\Gamma_1}\rtimes\Gamma_2.$ \end{enumerate} \end{theorem} {\it Proof.} Let $\{u_g\}_{g\in\Gamma}$ be the canonical unitaries in $M$ implementing the action $\Gamma\curvearrowright (X,\mu)$. Denote $A=L^\infty(X)$, $A_1=L^\infty(X)^{\Gamma_2}, A_2=L^\infty(X)^{\Gamma_1}$, $B=A_1\vee A_2$ and $M_i=A_i\rtimes\Gamma_i$, for any $i\in\{1,2\}$. Since $B$ is a $\Gamma$-invariant subalgebra of $A$, we consider the natural action $\Gamma\overset{\rho}{\curvearrowright} B$ of $\Gamma$ on $B$. Note that \cite[Lemma 3.5]{Va08} shows that $B'\cap (B\rtimes\Gamma)\prec_M A$, which implies by \cite[Lemma 3.7]{Va08} that $B'\cap (B\rtimes\Gamma)\prec_M B.$ Since $B\subset M$ is regular, Lemma \ref{small} shows that $B'\cap (B\rtimes\Gamma)\prec_{B\rtimes\Gamma} B.$ An application of Lemma \ref{L:free} gives us that $\Gamma\curvearrowright B$ is free. Since $\Gamma\overset{\rho}{\curvearrowright} B$ is isomorphic to the product action $\Gamma_1\times\Gamma_2 \curvearrowright A_1\bar\otimes A_2$, it follows that $\Gamma_i\curvearrowright A_i$ is free for any $i\in\{1,2\}$. Hence, \begin{equation}\label{fr} A_2'\cap M=A\rtimes\Gamma_1. \end{equation} Indeed, take $x=\sum_{g\in\Gamma}x_gu_g\in A_2'\cap M.$ It follows that $x_gb=x_g\rho_g(b)$, for all $b\in A_2.$ Therefore, $E_{A_2}(x_g^*x_g)b=E_{A_2}(x_g^*x_g)\rho_g(b)$, for all $b\in A_2$. Since $\Gamma_2$ acts freely on $A_2$, we get that $E_{A_2}(x_g^*x_g)=0$, for all $g\notin \Gamma_1$, which implies that $x_g=0$. Note that $P_1\prec_M A\rtimes\Gamma_{1}$ implies $A_2\prec_M P_2$ by applying \cite[Lemma 3.5]{Va08}.
Since $A_2\subset M$ is regular, we can apply \cite[Corollary 1.3]{Io06} (see also \cite[Proposition 12]{OP07}) and obtain that there exist a unitary $u_0\in\mathcal U(M)$ and a decomposition $M=P_1^{1/t_{0}}\bar\otimes P_2^{t_0}$, for some $t_0>0$, such that $u_0A_2u_0^*\subset P_2^{t_0}.$ Denote $Q_1=u_0^*P_1^{1/t_0}u_0$ and $Q_2=u_0^*P_2^{t_0}u_0.$ Since $A_2\subset Q_2$ we can apply Ge's tensor splitting theorem \cite[Theorem A]{Ge95} and derive that \begin{equation}\label{a1} A\rtimes\Gamma_1=Q_1\bar\otimes (A_2'\cap Q_2). \end{equation} Since $A_2'\cap Q_2\subset A\rtimes\Gamma_1$ and $Q_2\prec_M A\rtimes\Gamma_2$, it follows that $A_2'\cap Q_2\prec_M A.$ Thus, there exists a non-zero projection $a\in A_2'\cap Q_2$ such that $a(A_2'\cap Q_2)a$ is abelian. By cutting the equality \eqref{a1} by the projection $a$ and by passing to the center, we obtain that $A_2a=a(A_2'\cap Q_2)a$ and hence \begin{equation}\label{aa1} a(A\rtimes\Gamma_1)a=Q_1a\bar\otimes A_2a. \end{equation} (1) Now we can prove the first conclusion of the theorem. We first show that $L(\Gamma_1)\prec_M P_1.$ To see this, note that since $L(\Gamma_1)$ is a II$_1$ factor, we obtain by \cite[Lemma 4.5]{CdSS17} that there exists a unitary $u\in A\rtimes\Gamma_1$ such that $b:=uau^*\in L(\Gamma_1).$ Hence, relation \eqref{aa1} gives that $b(A\rtimes\Gamma_1)b=uQ_1u^*b\bar\otimes uA_2u^*b.$ Note that $bL(\Gamma_1)b$ is a von Neumann algebra with property (T) by \cite[Proposition 4.7(2)]{Po01} and $u(A_2'\cap Q_2)u^*b$ is an abelian von Neumann algebra. Therefore, we obtain that $bL(\Gamma_1)b\prec_{b(A\rtimes\Gamma_1)b} uQ_1u^*b$ (see e.g. \cite[Lemma 1]{HPV10}). This implies that $L(\Gamma_1)\prec_M P_1.$ By passing to relative commutants and by applying \cite[Lemma 3.5]{Va08} twice, we obtain that $P_2\prec_M M_2$ and hence $M_1\prec_M P_1.$ Since $M_1$ and $M_1'\cap M=M_2$ are factors, we can apply \cite[Proposition 12]{OP03} and obtain that there exist a unitary $v\in M$ and a decomposition $M=P_1^s\bar\otimes P_2^{1/s}$, for some $s>0$, such that $vM_1v^*\subset P_1^s.$ Therefore, $P_2^{1/s}\subset vM_2v^*.$ By applying \cite[Theorem A]{Ge95}, we obtain that there exists a factor $D\subset P_1^s$ such that $D\bar\otimes P_2^{1/s}=vM_2v^*.$ It is easy to see that $D$ is not diffuse, which implies that $D=\mathbb M_k(\mathbb C)$ for some integer $k\ge 1.$ By denoting $s_0=s/k$, we deduce that $P_2^{1/{s_0}}=vM_2v^*$ and $P_1^{s_0}=vM_1v^*.$ (2) Note that the assumption implies $A=A_1\bar\otimes A_2$ and the relation \eqref{aa1} shows that $a(M_1\bar\otimes A_2)a=Q_1a\bar\otimes A_2a$. Therefore, by disintegrating over the center $A_2a$ in the above equality and by using \cite[Theorem IV.8.23]{Ta01}, we deduce that $M_1$ is stably isomorphic to $Q_1$, hence to $P_1$. In a similar way, we obtain that $M_2$ is stably isomorphic to $P_2$. \hfill$\blacksquare$ \begin{theorem}\label{AA} Let $\Gamma=\Gamma_1\times\Gamma_2$ be a product of countable icc groups and let $\Gamma\overset{\sigma}{\curvearrowright} (X,\mu)$ be a free ergodic pmp action. Suppose that $M=L^\infty(X)\rtimes\Gamma= P_1\bar\otimes P_2 $ for some II$_1$ factors $P_1$ and $P_2$ such that $P_i\prec_M^s L^\infty(X)\rtimes\Gamma_i$, for all $i\in\{1,2\}.$ Then $L^\infty(X)\prec_M L^\infty(X)^{\Gamma_1}\vee L^\infty(X)^{\Gamma_2}.$ Moreover, assume that $\Gamma_1$ has property (T).
Then there exist a decomposition $M=P_1^t\bar\otimes P_2^{1/t}$, for some $t> 0$, and a unitary $u\in M$ such that $$ P_1^t=u (L^\infty(X)^{\Gamma_{2}}\rtimes\Gamma_{1})u^* \text{ and } P_2^{1/t}=u(L^\infty(X)^{\Gamma_{1}}\rtimes\Gamma_{2})u^*. $$ In particular, there exists a pmp action $\Gamma_{i}\curvearrowright (X_i, \mu_i)$ for any $i\in\{1,2\}$ such that the actions $\Gamma\curvearrowright X $ and $ \Gamma_{1}\times \Gamma_{2}\curvearrowright X_1\times X_2 $ are isomorphic. \end{theorem} {\it Proof.} Let $\{u_g\}_{g\in\Gamma}$ be the canonical unitaries in $M$ implementing the action $\Gamma\curvearrowright (X,\mu)$ and denote $A=L^\infty(X)$. For any $i\in\{1,2\}$, \cite[Proposition 2.4]{CKP14} implies that there exist non-zero projections $p_i\in P_i$, $q_i\in A\rtimes\Gamma_{i}$, a subalgebra $Q_i\subset q_i(A\rtimes\Gamma_{i})q_i$, a partial isometry $v_i\in q_iMp_i$ and a $*$-isomorphism $\theta_i: p_iP_ip_i\to Q_i$ such that: \begin{equation}\label{s1} Q_i\vee (Q_i'\cap q_i(A\rtimes\Gamma_{i})q_i)\subset q_i(A\rtimes\Gamma_{i})q_i \text{ has finite index, } \end{equation} \begin{equation}\label{s2} \theta_i(x)v_i=v_ix, \text{ for all } x\in p_iP_ip_i, \text{ and } \end{equation} \begin{equation}\label{s12} E_{A\rtimes\Gamma_{i}}(v_iv_i^*)q_i\ge \lambda_i q_i, \text{ for some positive number } \lambda_i. \end{equation} The rest of the proof is divided into four claims. {\bf Claim 1.} We can assume, in addition to \eqref{s1}-\eqref{s12}, that $Q_i$ also satisfies $Q_i'\cap q_i(A\rtimes\Gamma_{i})q_i=A^{\Gamma_{i}}q_i$, for any $i\in\{1,2\}.$ {\it Proof.} For simplicity we prove the claim only for $i=1$. Denote $R=Q_1'\cap q_1(A\rtimes\Gamma_{1})q_1$. First, note that $R\prec_M A.$ Indeed, on the one hand, since $P_1\prec_M Q_1$, by considering the relative commutants, \cite[Lemma 3.5]{Va08} implies that $R\prec_M P_2.$ Applying \cite[Lemma 3.7]{Va08} we get $R\prec_M A\rtimes{\Gamma_{2}}$. On the other hand, $R\subset q_1(A\rtimes\Gamma_{1})q_1$. Hence, one can check that we actually have $R\prec_M A.$ Therefore, there exists a non-zero projection $r\in R$ such that $rRr$ is abelian. Applying Lemma \ref{PP}(1), we have that $Q_1r\vee rRr\subset r(A\rtimes\Gamma_{1})r$ has finite index. Since $Q_1$ is a factor, Lemma \ref{L: center}(1) shows that by replacing $r$ by a smaller projection in $R$, we can assume that $$rRr=\mathcal Z(rRr)=\mathcal Z(r(A\rtimes{\Gamma_{1}})r)=A^{\Gamma_{1}}r. $$ Lemma \ref{PP}(1) guarantees that $Q_1r\vee rRr\subset r(A\rtimes{\Gamma_{1}})r$ still has finite index. Since $E_{A\rtimes\Gamma_{1}}(v_1v_1^*)q_1\ge \lambda_1 q_1$ and $r\leq q_1$, we have that $rv_1\neq 0.$ Therefore, by replacing $Q_1$ by $Q_1r$, $\theta_1(\cdot)$ by $\theta_1(\cdot)r$, $q_1$ by $r$ and $v_1$ by the partial isometry from the polar decomposition of $rv_1$, the relations \eqref{s1}-\eqref{s12} are still satisfied. This proves the claim. \hfill$\square$ Let $i\in\{1,2\}$ and denote by $i+1$ the element in the set $\{1,2\}\setminus \{i\}.$ Since $P_i\prec_M Q_i$, by passing to relative commutants, we get that $A^{\Gamma_{i}}q_i\prec_M P_{i+1}$. Then there exist projections $k_{i}\in A^{\Gamma_{i}}q_i$, $r_{i+1}\in P_{i+1}$, a $*$-homomorphism $\psi_i: A^{\Gamma_{i}}k_{i}\to r_{i+1}P_{i+1}r_{i+1}$ and a non-zero partial isometry $w_i\in r_{i+1}Mk_i$ such that \begin{equation}\label{c1} \psi_i(x)w_i=w_ix, \text{ for all } x\in A^{\Gamma_{i}}k_{i}.
\end{equation} By restricting the projection $w_iw_i^*$ if necessary, we can assume that \begin{equation}\label{c2} E_{P_{i+1}}(w_iw_i^*)r_{i+1}\ge \beta_{i+1}r_{i+1}, \text{ for some positive number } \beta_{i+1}. \end{equation} Denote $R_{i+1}=\psi_i(A^{\Gamma_{i}}k_{i}).$ {\bf Claim 2.} $ R_1\prec_M^s A^{\Gamma_{2}}k_2 \text{ and } R_2\prec_M^s A^{\Gamma_{1}}k_1. $ {\it Proof.} We prove only the second intertwining, since the first one follows in a similar way. To this end, take a non-zero projection $b\in\mathcal N_{r_2Mr_2}(R_2)'\cap r_2Mr_2\subset R_2'\cap r_2P_2r_2 $ and define the $*$-homomorphism $\psi_b: A^{\Gamma_{1}}k_1\to R_2b$ by $\psi_b(x)=\psi_1(x)b,$ for all $x\in A^{\Gamma_{1}}k_1$. Note that $\psi_b(x)bw_1=bw_1x,$ for all $x\in A^{\Gamma_{1}}k_1$. We show that $bw_1$ is non-zero. If this is not the case, then $0=E_{P_2}(bw_1w_1^*)=bE_{P_2}(w_1w_1^*)$. Since $b\leq r_2$, relation \eqref{c2} implies that $b=0$, which is false. Hence, $bw_1\neq 0.$ Thus, by replacing $bw_1$ by the partial isometry from its polar decomposition and by noticing that $\psi_b$ is a $*$-isomorphism, we get that $R_2b\prec_M A^{\Gamma_{1}}k_1.$ We use \cite[Lemma 2.4(2)]{DHI16} to conclude that $R_2\prec_M^s A^{\Gamma_{1}}k_1.$ \hfill$\square$ We continue with the following: {\bf Claim 3.} $Q_1\vee A^{\Gamma_{1}}q_1\prec_M P_1\bar\otimes R_2$ and $Q_2\vee A^{\Gamma_{2}}q_2\prec_M R_1\bar\otimes P_2.$ {\it Proof.} Due to symmetry we only need to show the first intertwining. First we construct a $*$-isomorphism $\theta: Q_1k_1\to p_1P_1p_1$ and a non-zero partial isometry $v\in p_1Mk_1$ such that $$ \theta(x)v=vx, \text{ for all } x\in Q_1k_1. $$ Note that $\theta_1^{-1}:Q_1\to p_1P_1p_1$ is a $*$-isomorphism satisfying $\theta_1^{-1}(x)v_1^*=v_1^*x,$ for all $x\in Q_1.$ Since $Q_1$ is a factor and $k_1\in Q_1'\cap q_1(A\rtimes\Gamma_{1})q_1$, we can define the $*$-homomorphism $\theta: Q_1k_1\to p_1P_1p_1$ by letting $\theta(xk_1)=\theta_1^{-1}(x),$ for all $x\in Q_1.$ Note that $\theta(xk_1)v^*_1k_1=v^*_1k_1xk_1,$ for all $x\in Q_1.$ Let $v$ be the partial isometry obtained from the polar decomposition of $v_1^*k_1$. Note that $v\neq 0$ by using \eqref{s12}. Therefore, $\theta(x)v=vx$, for all $x\in Q_1k_1.$ Notice that $vv^*\in (P_1'\cap M)p_1$ and $P_1'\cap M= P_2$, so there exists a non-zero projection $\tilde p_2\in P_2$ such that $vv^*=p_1\otimes \tilde p_2$. Without loss of generality, we can assume that \begin{equation}\label{s4} \tilde p_2 \leq r_2 \text{ or } r_2 \leq \tilde p_2. \end{equation} Indeed, since $P_2$ is a II$_1$ factor, there exists a unitary $u\in\mathcal U(P_2)$ such that $\tilde p_2\leq u r_2 u^*$ or $ur_2u^*\leq \tilde p_2.$ By replacing $\psi_1(\cdot)$ by $u\psi_1(\cdot) u^*$, $r_2$ by $ur_2u^*$, $R_2$ by $uR_2u^*$ and $w_1$ by $uw_1$, relations \eqref{c1} and \eqref{c2} still hold. Therefore we can assume \eqref{s4} to be true. We suppose by contradiction that $Q_1k_1\vee A^{\Gamma_{1}}k_1\nprec_M P_1\bar\otimes R_2.$ Then there exist two sequences of unitaries $(u_n)_n\subset\mathcal U(Q_1)$ and $(v_n)_n\subset \mathcal U(A^{\Gamma_{1}})$ such that $$ \|E_{P_1\bar\otimes R_2}(x(u_nk_1)(v_nk_1)y)\|_2\to 0, \text{ for all } x,y\in M. $$ By taking $x=v$, we get that \begin{equation}\label{e} \|E_{P_1\bar\otimes R_2}(\theta(u_nk_1)v v_ny)\|_2= \|E_{P_1\bar\otimes R_2}(v v_ny)\|_2\to 0, \text{ for all } y\in M. \end{equation} We now argue that \begin{equation}\label{extra} E_{P_1\bar\otimes R_2}(vv^* w_1w_1^*)=0.
\end{equation} Take $y=au_g^*w_1^*$ in \eqref{e}, for some $a\in A$ and $g\in\Gamma$. Since $A^{\Gamma_{1}}$ is normalized by $u_g$, we obtain that $$ \begin{array}{rcl} \|E_{P_1\bar\otimes R_2}(v v_nau_g^*w_1^*)\|_2&=& \|E_{P_1\bar\otimes R_2}(v au_g^*(\sigma_g(v_n)w_1^*))\|_2\\ &=&\|E_{P_1\bar\otimes R_2}(v au_g^*w_1^*\psi_1(\sigma_g(v_n)k_1))\|_2\\ &=&\|E_{P_1\bar\otimes R_2}(v au_g^*w_1^*)\|_2\\ \end{array} $$ tends to $0$ as $n\to\infty$. This implies that $\|E_{P_1\bar\otimes R_2}(v au_g^*w_1^*)\|_2=0$, for all $a\in A$ and $g\in\Gamma$. This proves \eqref{extra}, which gives us that $p_1E_{P_1\bar\otimes R_2}(\tilde p_2w_1w_1^*)=0$. We obtain $p_1\otimes E_{R_2}(\tilde p_2w_1w_1^*)=0$ and therefore $p_1\otimes E_{R_2}(\tilde p_2E_{P_2}(w_1w_1^*))=0$, since $\tilde p_2\in P_2$. Hence, $\tau(r_2\tilde p_2E_{P_2}(w_1w_1^*))=0$. Using that $E_{P_2}(w_1w_1^*)r_2\ge \beta_2r_2$, we get that $\tilde p_2E_{P_2}(w_1w_1^*)r_2\tilde p_2\ge \beta_2r_2\tilde p_2$. Altogether, this implies that $\beta_2\tau(\tilde p_2r_2)=0$, which contradicts \eqref{s4} since $\beta_2$ is non-zero. Thus, $Q_1k_1\vee A^{\Gamma_{1}}k_1\prec_M P_1\bar\otimes R_2$, which ends the proof of the claim. \hfill$\square$ Finally, we obtain the following: {\bf Claim 4.} $A\prec_M^s A^{\Gamma_{1}}\vee A^{\Gamma_{2}}.$ {\it Proof.} Denote $D_i=R_i\oplus \mathbb C (1-r_i),$ for $i\in\{1,2\}.$ Note that Claim 1 together with relation \eqref{s1} shows that $Q_i\vee A^{\Gamma_i}q_i\subset q_i(A\rtimes\Gamma_i)q_i$ has finite index, for $i\in\{1,2\}.$ Combining Claim 3 and Lemma \ref{L: center}(2), we obtain that $A\prec_M P_1\bar\otimes D_2$ and $A\prec_M D_1\bar\otimes P_2.$ Note that \cite[Lemma 2.4(2)]{DHI16} together with Proposition \ref{L: PV} imply that $A\prec^s_M D_1\bar\otimes D_2.$ Notice also that Claim 2 shows that $D_i\prec_M^s A^{\Gamma_{1}}\vee A^{\Gamma_{2}}$, for any $i\in\{1,2\}.$ Since $A^{\Gamma_{1}}\vee A^{\Gamma_{2}}$ is regular and the algebra $D_2$ is abelian, we can apply Lemma \ref{L: joint} and obtain that $D_1\bar\otimes D_2\prec_M^s A^{\Gamma_{1}}\vee A^{\Gamma_{2}}.$ By applying \cite[Lemma 3.7]{Va08}, we deduce that $A\prec_M^s A^{\Gamma_{1}}\vee A^{\Gamma_{2}}$. \hfill$\square$ The moreover part follows from the first part of Theorem \ref{Th:split}. In particular, we can represent $A^{\Gamma_2}=L^\infty(X_1,\mu_1)$ and $A^{\Gamma_1}=L^\infty(X_2,\mu_2)$, for some standard probability spaces $(X_1,\mu_1)$ and $(X_2,\mu_2)$. Hence, for any $i\in\{1,2\}$ there exists a pmp action $\Gamma_{i}\curvearrowright (X_i,\mu_i)$ such that $\Gamma\curvearrowright X$ is isomorphic to $\Gamma_{1}\times\Gamma_{2}\curvearrowright X_1\times X_2. $ \hfill$\blacksquare$ \section{Proofs of the main results} We start this section by presenting another tool needed for the proof of Theorem \ref{A} and we will conclude by proving the main results mentioned in the introduction. \begin{proposition}\label{start} Let $\Gamma=\Gamma_1\times\dots \times\Gamma_n$ be a product of $n\ge 2$ groups that belong to the class $\mathcal C$.
Let $\Gamma\curvearrowright (X,\mu)$ be a free ergodic pmp action and denote $M=L^\infty (X)\rtimes\Gamma.$\\ Suppose that $M= P_1\bar\otimes P_2 $, for some II$_1$ factors $P_1$ and $P_2.$ Then there exists a partition $T_1\sqcup T_2=\{1,\dots ,n \}$ into non-empty sets such that $P_i\prec^s_M L^\infty(X)\rtimes\Gamma_{T_i}$, for $i\in\{1,2\}.$ \end{proposition} {\it Proof.} Denote $A=L^\infty(X).$ For any $i\in \{1,2\}$, let $T_i$ be a minimal subset of $\{1,\dots,n\}$ with the property that $P_i \prec^s_M A\rtimes\Gamma_{T_i}.$ Notice that $T_i$ is non-empty since any corner of a II$_1$ factor is non-abelian. We want to show that $T_1 \sqcup T_2=\{1,\dots,n\}.$ Note that since $P_i$ is regular and $M$ is a factor, \cite[Lemma 2.4(2)]{DHI16} implies that $P_i\prec_M A\rtimes{\Gamma_S}$ if and only if $P_i\prec_M^s A\rtimes {\Gamma_S}$, for any subset $S\subset \{1,\dots,n\}$. First we notice that $\{1,\dots,n\}=T_1\cup T_2$. Indeed, by applying \cite[Lemma 2.3]{BV12} we get that $M\prec A\rtimes \Gamma_{T_1\cup T_2}$. This shows that $\{1,\dots,n\}=T_1\cup T_2$, since $\Gamma_t$ is an infinite group, for any $t\in\{1,\dots,n\}.$ We will finish the proof by proving the following claim. {\bf Claim.} $T_1\cap T_2$ is empty. {\it Proof.} For any $i\in\{1,2\},$ \cite[Proposition 2.4]{CKP14} implies that there exist non-zero projections $p_i\in P_i$, $q_i\in A\rtimes\Gamma_{T_i}$, a subalgebra $Q_i\subset q_i(A\rtimes\Gamma_{T_i})q_i$, a partial isometry $v_i\in q_iMp_i$ and a $*$-isomorphism $\theta_i: p_iP_ip_i\to Q_i$ such that $\theta_i(x)v_i=v_ix, \text{ for all } x\in p_iP_ip_i,$ and $ Q_i\vee (Q_i'\cap q_i(A\rtimes\Gamma_{T_i})q_i)\subset q_i(A\rtimes\Gamma_{T_i})q_i \text{ has finite index. }$ Moreover, the support projection of $E_{A\rtimes\Gamma_{T_i}}(v_iv_i^*)$ can be assumed to equal $q_i.$ Denote $S_i:=Q_i'\cap q_i(A\rtimes\Gamma_{T_i})q_i$. Assume by contradiction that there exist a non-zero projection $z\in\mathcal N_{q_1(A\rtimes\Gamma_{T_1})q_1}(S_1)'\cap q_1(A\rtimes\Gamma_{T_1})q_1$ and an index $j\in T_1$ such that $S_1z_0$ is non-amenable relative to $A\rtimes \Gamma_{T_{1}\setminus\{j\}},$ for all non-zero projections $z_0\in (Q_1\vee S_1)'\cap z(A\rtimes\Gamma_{T_1})z$. Note that $z,z_0\in S_1$ and that the inclusion $z_0(Q_1\vee S_1)z_0\subset z_0(A\rtimes\Gamma_{T_1})z_0$ has finite index by Lemma \ref{PP}. Therefore, Corollary \ref{all} implies that $Q_1z_0\prec_{A\rtimes{{\Gamma_{T_1}}}} A\rtimes \Gamma_{T_{1}\setminus\{j\}}.$ By applying \cite[Lemma 2.4(3)]{DHI16}, we get that $Q_1z\prec^s_{A\rtimes{{\Gamma_{T_1}}}} A\rtimes \Gamma_{T_{1}\setminus\{j\}}.$ It is easy to see that the moreover part of the previous paragraph shows that $P_1\prec_M Q_1z.$ Hence, by applying \cite[Lemma 3.7]{Va08} we get that $P_1\prec_M A\rtimes\Gamma_{T_{1}\setminus\{j\}},$ which contradicts the minimality of $T_1.$ Therefore, for any $j\in T_1$ and $z\in\mathcal N_{q_1(A\rtimes\Gamma_{T_1})q_1}(S_1)'\cap q_1(A\rtimes\Gamma_{T_1})q_1$, there exists a non-zero projection $z_0\in (Q_1\vee S_1)'\cap z(A\rtimes\Gamma_{T_1})z$ such that $S_1z_0$ is amenable relative to $A\rtimes \Gamma_{T_{1}\setminus\{j\}}.$ \cite[Lemma 2.6(2)]{DHI16} shows that we can assume $z_0\in \mathcal N_{z(A\rtimes\Gamma_{T_1})z}(S_1z)'\cap z(A\rtimes\Gamma_{T_1})z$. By applying \cite[Proposition 2.7]{PV11} finitely many times, we obtain that there exists a non-zero projection $z_1\in\mathcal N_{q_1(A\rtimes\Gamma_{T_1})q_1}(S_1)'\cap q_1(A\rtimes\Gamma_{T_1})q_1$ such that $S_1z_1$ is amenable.
In a similar way, there exists a non-zero projection $z_2\in\mathcal N_{q_2(A\rtimes\Gamma_{T_2})q_2}(S_2)'\cap q_2(A\rtimes\Gamma_{T_2})q_2\subset Q_2$ such that $S_2z_2$ is amenable. Since $P_1\prec_M Q_1z_1$ and $P_2\prec_M Q_2z_2$, \cite[Lemma 3.5]{Va08} implies that $S_1z_1\prec_M P_2$ and $S_2z_2\prec_M P_1.$ By proceeding as in the proof of Theorem \ref{AA} (Claims 3 and 4), we obtain that there exist amenable subalgebras $D_1\subset P_1$ and $D_2\subset P_2$ such that $L(\Gamma_{T_1}\cap \Gamma_{T_2})\prec_M D_1\bar\otimes D_2.$ This implies that $\Gamma_{T_1}\cap \Gamma_{T_2}$ is amenable, hence $T_1\cap T_2$ is empty. \hfill$\blacksquare$ \begin{remark} We provide the following shorter argument for proving Proposition \ref{start} in the case when the groups $\Gamma_i$ are weakly amenable, bi-exact groups or free products. We only need to prove the claim. First, for any $t\in \{1,\dots,n\}$, denote $\hat t=\{1,\dots,n\}\setminus\{t\}$. Suppose by contradiction that there exists an element $t\in T_1\cap T_2.$ Then $P_1\nprec_M A\rtimes \Gamma_{T_1\setminus\{t\}}$ by the minimality of $T_1$. This is equivalent to $P_1\nprec_M A\rtimes \Gamma_{\hat t}$ using \cite[Lemma 2.8(2)]{DHI16}. If $\Gamma_t$ is a free product, by applying Lemma \ref{L: amalgam}, we must have $P_2\prec_M A\rtimes\Gamma_{\hat t}$. Using \cite[Lemma 2.8(2)]{DHI16}, this shows that $P_2\prec_M A\rtimes\Gamma_{T_2\setminus\{t\}}$, which contradicts the minimality of $T_2$. On the other hand, if $\Gamma_t\in \mathcal C_{rss}$, by applying Lemma \ref{L: rss} we obtain that $P_2$ is amenable relative to $A\rtimes\Gamma_{\hat t}$, which implies that $P_2\prec A \rtimes\Gamma_{T_2\setminus \{t\}}$ or $M$ is amenable relative to $A\rtimes\Gamma_{\hat t}.$ The former contradicts the minimality of $T_2$, while the latter contradicts the non-amenability of $\Gamma_t$ by \cite[Proposition 2.4]{OP07}. \end{remark} The proof of Theorem \ref{A} follows by combining Proposition \ref{start} with Theorem \ref{AA}. Corollary \ref{A2} is obtained directly from Theorem \ref{A}. We continue now with the proofs of Corollary \ref{C} and Theorem \ref{UPFgeneral}. {\it Proof of Corollary \ref{C}.} By applying Theorem \ref{A} finitely many times, we can find an integer $1\leq k \leq n$, a partition $S_1\sqcup ... \sqcup S_k =\{1,...,n\}$ and pmp actions $\Gamma_{S_{i}}\curvearrowright (X_i,\mu_i)$ such that $\Gamma\curvearrowright X \text{ is isomorphic to } \Gamma_{S_1}\times ...\times \Gamma_{S_k}\curvearrowright X_1\times ...\times X_k,$ and $M_i=L^\infty(X_i)\rtimes\Gamma_{S_i}$ is prime for all $i\in\{1,...,k\}$. Note that the following holds: {\bf Claim.} If $\Gamma\curvearrowright X$ is isomorphic to $\Gamma_{T_1}\times\Gamma_{T_2}\curvearrowright Y_1\times Y_2$ for a partition $T_1\sqcup T_2=\{1,...,n\}$, then there exists a partition $J_1\sqcup J_2=\{1,...,k\}$ such that $T_1=\sqcup_{i\in J_1} S_i$ and $T_2=\sqcup_{i\in J_2} S_i.$ {\it Proof.} First note that it is enough to show that if $S_i\cap T_j\neq \emptyset$, for some $i\in\{1,...,k\}$ and $j\in\{1,2\}$, then $S_i\subset T_j.$ To this end, take $i$ and $j$ as before. The assumption implies that the actions $\Gamma_{S_i}\curvearrowright X$ and $\Gamma_{S_i\cap T_1}\times\Gamma_{S_i\cap T_2}\curvearrowright Y_1\times Y_2$ are isomorphic.
This shows that $$ M_i=L^\infty(X_i)\rtimes\Gamma_{S_i}=(L^\infty(X)^{\Gamma_{S_i^c\cap T_2}}\rtimes\Gamma_{S_i\cap T_1})\bar\otimes (L^\infty(X)^{\Gamma_{S_i^c\cap T_1}}\rtimes\Gamma_{S_i\cap T_2}), $$ where we denote $S_i^c:=\{1,...,n\}\setminus S_i$, for any $1\leq i\leq k.$ Since the algebra $M_i$ is prime, we must have $S_i\cap T_j=S_i,$ which implies that $S_i\subset T_j.$ \hfill$\square$ The Claim shows that the partition $S_1\sqcup ... \sqcup S_k =\{1,...,n\}$ is unique up to a permutation of the sets. We continue now by proving the moreover part. (1) Let $M=P_1\bar\otimes P_2$ for some II$_1$ factors $P_1$ and $P_2$. If we apply Theorem \ref{A} we obtain a partition $T_1\sqcup T_2=\{1,...,n\}$, a decomposition $M=P_1^t\bar\otimes P_2^{1/t}$ for some positive $t$, and a unitary $u\in M$ such that $$ P_1^{t}=u(L^\infty(X)^{\Gamma_{T_2}}\rtimes\Gamma_{T_1})u^* \text{ and } P_2^{1/t}=u(L^\infty(X)^{\Gamma_{T_1}}\rtimes\Gamma_{T_2})u^*. $$ The Claim shows that there exists a partition $J_1\sqcup J_2=\{1,...,k\}$ such that $T_1=\sqcup_{i\in J_1} S_i$ and $T_2=\sqcup_{i\in J_2} S_i.$ This implies exactly the conclusion. (2) Assume that $M=P_1\bar\otimes \dots \bar\otimes P_m$, for some $m\ge k$. Part (1) combined with induction implies that $m\leq k$ and that there exist a partition $J_1\sqcup\dots \sqcup J_m=\{1,\dots,k\}$, a decomposition $M=P_1^{t_1}\bar\otimes\dots\bar\otimes P_m^{t_m}$ with $t_1\dots t_m=1$, and a unitary $u\in M$ such that $P_s^{t_s}=u(\bar\otimes_{j\in J_s} M_j)u^*$, for any $s\in\{1,...,m\}.$ Therefore $m=k$ and the conclusion holds. (3) For proving this last part, we proceed as in (2). Since each $P_s$ is prime, each $J_s$ has only one element. This shows once again that $m=k$ and the conclusion holds. \hfill$\blacksquare$ {\it Proof of Theorem \ref{UPFgeneral}.} (1) Denote $X=X_1\times ...\times X_k$. Let $M=P_1\bar\otimes P_2$ for some II$_1$ factors $P_1$ and $P_2$. By applying Proposition \ref{start} and Theorem \ref{Th:split}(2), we obtain that there exists a partition $I_1\sqcup I_2=\{1,...,k\}$ such that $P_1$ is stably isomorphic to $L^\infty(X)^{\Gamma_{I_2}}\rtimes\Gamma_{I_1}=\bar\otimes_{i\in I_1}M_i$ and $P_2$ is stably isomorphic to $L^\infty(X)^{\Gamma_{I_1}}\rtimes\Gamma_{I_2}=\bar\otimes_{i\in I_2}M_i.$ (2) \& (3) Assume that $M=P_1\bar\otimes\dots\bar\otimes P_m$ for some integer $m$ and II$_1$ factors $P_1,\dots ,P_m.$ Part (1) combined with induction implies that $m\leq k$ and that there exists a partition $J_1\sqcup\dots \sqcup J_m=\{1,\dots,k\}$ such that $P_s$ is stably isomorphic to $\bar\otimes_{j\in J_s}M_j$, for any $1\leq s\leq m$. If $m\ge k$ or if each $P_s$ is prime, we obtain that $m=k$ and each $J_s$ has only one element. For proving the moreover part, assume that $\Gamma_i\curvearrowright (X_i,\mu_i)$ is strongly ergodic for any $1\leq i\leq k.$ Note that \cite[Theorem C]{CSU13} combined with \cite[Examples 1.4, 1.5]{CSU13} imply that the class $\mathcal C$ is contained in the class of non-inner amenable groups. In combination with \cite{Ch82}, we obtain that $M$ does not have property Gamma. Let $M=P_1\bar\otimes P_2$ for some II$_1$ factors $P_1$ and $P_2$. We will show only part (1) of the moreover part, since parts (2) and (3) can be deduced as in Corollary \ref{C}. The previous paragraph implies that $P_1$ and $P_2$ do not have property Gamma. As before, we can apply Proposition \ref{start} and obtain a partition $I_1\sqcup I_2=\{1,...,k\}$ such that $P_j\prec_M L^\infty(X)\rtimes \Gamma_{I_j}$, for any $j\in\{1,2\}$.
Applying \cite[Proposition 6.3]{Ho15}, we obtain that $P_j\prec_M \bar\otimes_{i\in I_j} M_i$, for any $j\in\{1,2\}.$ By proceeding as in the proof of Theorem \ref{AA}, we obtain that there exist a unitary $u\in M$ and a decomposition $M=P_1^t\bar\otimes P_2^{1/t}$, for some $t>0$, such that $P_1^t= u(\bar\otimes_{i\in I_1} M_i)u^*$ and $P_2^{1/t}= u(\bar\otimes_{i\in I_2} M_i)u^*$. This ends the proof. \hfill$\blacksquare$
{ "timestamp": "2019-10-24T02:14:36", "yymm": "1904", "arxiv_id": "1904.06637", "language": "en", "url": "https://arxiv.org/abs/1904.06637" }
\section{Introduction} \IEEEPARstart{P}{hotometric} bundle adjustment (PBA) has proven to be an effective method for estimating scene geometry and camera motion in Visual Odometry (VO) \cite{Engel2016a}. As a direct optimization, PBA minimizes the photometric error of map point observations over a local sliding window of active keyframes. The number of active keyframes is limited to avoid large computations. Points are sampled across image pixels with locally high gradient magnitude, such as edges and weak intensity variations. They are associated with only one keyframe, where they are initialized. In the rest of the keyframes there is no explicit and fixed data association, because the PBA recomputes the correspondences as part of the optimization. Thus, direct methods do not rely on the repeatability of selected points and are able to operate in scenes with low texture but with contours.

Current PBA-based methods are only able to do VO, which builds a temporary map to precisely estimate the camera pose. They use a sliding window that selects active keyframes close in time, marginalizing map points that leave the field of view. This strategy reduces the computational complexity by removing old cameras and points while keeping the system consistent with respect to the unobservable degrees of freedom, i.e. absolute pose and scale. Hence, if the camera revisits already mapped areas, the PBA cannot reuse marginalized map points and is forced to duplicate them. This is a severe limitation: the system cannot benefit from the highly informative reobservations of map points, which causes motion drift and structure inconsistencies.

In contrast, VSLAM methods build a persistent map of the scene and continuously process map point reobservations. Instead of using a sliding window and marginalization, they retain keyframes and map points with a fixed location in the model and select the active keyframes and map points according to covisibility criteria, i.e. whether they observe several map points in common. This results in a network of keyframes whose connectivity is based on whether they observe the same scene region, even if they are far apart in time. The fixation strategy keeps the system consistent with respect to the unobservable degrees of freedom (gauge freedoms) and enables the reuse of map points. Thus, VSLAM approaches can extract the rich information of map point reobservations, reducing the drift in the estimates.

\begin{figure} \centering \includegraphics[width=0.45\textwidth]{./images/vduplication} \caption{Estimated map by DSM with (bottom) and without (top) point reobservations in the V2\_01\_easy sequence of the EuRoC MAV dataset. DSM can produce consistent maps without duplicates.} \label{fig:example_V21} \end{figure}

Transforming PBA-based direct VO systems into VSLAM is not straightforward because there are several challenges to solve. First, when the camera revisits already mapped areas, the system has to select active keyframes that include map point reobservations. These are difficult to obtain because there are no point correspondences between keyframes. At the same time, we have to guarantee accurate map expansion during exploration. We propose to select the active keyframes according to a combination of temporal and covisibility criteria. In this way, the PBA includes in the optimization keyframes that observe the active scene region with high parallax, even if they are far apart in time. Second, the PBA includes map points and keyframes distant in time and, hence, affected by the estimation drift.
Normally, the photometric convergence radius is around 1-2 pixels due to image linearization and, thus, a standard PBA cannot compensate for the drift. We propose a multiscale PBA optimization to successfully handle these convergence difficulties. Third, we have to ensure the robustness of the PBA against spurious observations. They mainly arise from the widely separated active keyframes -- in contrast to the close keyframes of VO -- which render occlusions and scene reflections that violate the photo-consistency assumption. We incorporate a robust influence function based on the t-distribution into the PBA to handle the adverse effect of these observations.

We present a new direct VSLAM system, DSM (Direct Sparse Mapping). To the best of our knowledge, this is the first fully direct monocular VSLAM method that is able not only to detect point reobservations but also to extract the rich information they provide (see Fig.\,\ref{fig:example_V21}). In summary, we make the following contributions: \begin{itemize} \item A persistent map which allows reusing existing map information directly within the photometric formulation. \item The Local Map Covisibility Window (LMCW) criteria to select the active keyframes that observe the same scene region, even if they are not close in time, together with the map point reobservations. \item We show that the PBA needs a coarse-to-fine scheme to converge. This exploits the rich geometrical information provided by point reobservations from keyframes rendering high parallax. \item We show that a t-distribution based robust influence function, together with a pixel-wise outlier management strategy, increases the robustness of the PBA against outliers derived from the activation of distant keyframes. \item An experimental validation of DSM on the EuRoC dataset \cite{Burri2015}. We report quantitative results for the camera trajectory and, for the first time, for the reconstructed map. We obtain the most accurate results among direct monocular methods so far. \item We make our implementation publicly available\footnote{https://github.com/jzubizarreta/dsm}. \end{itemize}

\section{Related Work} The first real-time monocular VSLAM methods were indirect approaches, using FAST and Harris corners associated across images in the form of fixed 2D correspondences. The 3D geometry was estimated by minimizing the reprojection error. They relied on the repeatability of the corner detectors and required rich visual texture. Thanks to feature descriptors, they can associate images distant in time. Davison et al. present MonoSLAM \cite{Davison2007}, which recovers the scene geometry in an EKF-based framework, later extended in \cite{Civera2008} to include an inverse depth parametrization. Klein and Murray in PTAM \cite{Klein2007} propose for the first time to parallelize the tracking and mapping tasks, demonstrating the viability of using a BA scheme to maintain a persistent map in small workspaces. Later, \cite{Strasdat2011} proposes a double window optimization to extend the potential of feature-based VSLAM to long-term applications. It combines a local BA with a global pose-graph optimization using covisibility constraints based on point matches. Following these works, ORB-SLAM \cite{Mur-Artal2015} was presented, which is the reference solution among indirect VSLAM approaches. To date, it is the most accurate monocular VSLAM method in many scenarios. The key to its precision comes from the management of map point reobservations in the BA using an appearance-based covisibility graph.
Similarly, DSM transfers the main ideas of indirect VSLAM techniques to direct systems, significantly increasing the accuracy of their estimates. As a direct approach, DSM does not compute explicit point matches and, thus, cannot build an appearance-based covisibility graph. Instead, DSM relies on geometric constraints to build covisibility connections between keyframes that are far apart in time. In addition, it works with a smaller window of covisible keyframes than ORB-SLAM to control the computational cost.

Recently, VO approaches have shown impressive performance. SVO \cite{Forster2014} proposes a hybrid approach to build a semi-direct VO system. It uses direct techniques to track and triangulate points but ultimately optimizes the reprojection error of those points in the background. OKVIS \cite{Leutenegger2015} presents a feature-based visual-inertial odometry system that continuously optimizes the geometry of a local map, marginalizing the rest. More recently, Engel et al. \cite{Engel2016a} made a breakthrough with DSO, the first fully direct VO approach that jointly optimizes motion and structure by formulating a PBA and including a photometric calibration in the model. Inspired by OKVIS, DSO performs the optimization over a sliding window, where old keyframes as well as points that leave the field of view of the camera are marginalized. It has shown impressive odometry performance and is the reference among direct VO methods. However, as a pure VO approach, DSO cannot reuse map points once they are marginalized, which causes camera localization drift and map inconsistencies. DSM uses the same photometric model as DSO and goes one step further to build the first fully direct VSLAM solution with a persistent map.

Many VO systems have been extended to cope with loop closures. Most propose to include a feature-based Bag of Binary Words (DBoW) to detect loop closures and estimate pose constraints between keyframes, following \cite{Galvez-Lopez2012}. Then, a pose-graph optimization finds a correction for the keyframe trajectory. VINS-mono \cite{Qin2018} uses a front-end similar to OKVIS but includes additional BRIEF features to perform loop closure. LSD-SLAM \cite{Engel2014} was the first direct monocular VO for large-scale environments. The method recovers semi-dense depth maps using small-baseline stereo comparisons and reduces accumulated drift with a pose-graph optimization. Loop closures are detected using FAB-MAP \cite{Cummins2008}, an appearance-based loop detection algorithm, which uses features different from those of the direct odometry. LDSO \cite{Gao2018} extends DSO with a conventional ORB-DBoW to detect loop closures and reduce the trajectory drift by pose-graph optimization. All these methods have the following drawbacks: (1) they use an objective function and points different from those of the odometry; (2) loop closure detection relies on feature repeatability, missing many corrections; (3) the error correction is distributed equally over keyframes, which may not be the optimal solution; (4) although the trajectory is spatially corrected, existing information from map points is not reused and is, thus, ignored during the optimization. In contrast, full VSLAM systems like ORB-SLAM and DSM reuse the map information thanks to their persistent map. The reobservations are processed with their standard BA (either geometric or photometric), resulting in more accurate estimates.
Thanks to the improvement in accuracy, the need for loop closure detection and correction is postponed to trajectories longer than in their VO counterparts.

Moreover, DVO \cite{Kerl2013} proposes a probabilistic formulation for direct image alignment. Inspired by \cite{Lange1989}, they show the robustness of using a t-distribution to manage the influence of noise and outliers. \cite{Babu2016} demonstrates that the t-distribution represents photometric errors well, but not geometric errors. We incorporate these ideas into the sparse photometric model together with a novel outlier management strategy. In this way, we make the non-linear PBA optimization robust to spurious point observations, which normally appear as a result of widely separated active keyframes and the lack of explicit point matches.

\section{Direct Mapping} The proposed VSLAM system consists of a tracking front-end (Sec. \ref{sec:frontend}) and an optimization back-end (Sec. \ref{sec:PBA}). The front-end tracks frames and points, and also provides the coarse initialization for the PBA. The back-end determines which keyframes form the local window (Sec. \ref{sec:lmcw}) and jointly optimizes all the active keyframe and map point parameters. Similarly to most VSLAM systems \cite{Klein2007,Mur-Artal2015,Engel2016a}, the front-end and the back-end run in two parallel threads: \begin{enumerate} \item The tracking thread obtains the camera pose at frame rate. It also decides when the map needs to grow by marking some of the tracked frames as keyframes. \item The mapping thread processes all new frames to track points from active keyframes. Besides, if the new frame is marked as a keyframe, the local window is recalculated, new points are activated and the PBA optimizes motion (keyframes) and structure (points) together using the active keyframes. Finally, it keeps the model globally consistent, i.e. removes outliers, detects occlusions and avoids point duplications (Sec. \ref{sec:outlier}). \end{enumerate}

The persistent map is composed of keyframes that are activated or deactivated according to covisibility criteria with respect to the latest keyframe. The absolute pose of a keyframe $i$ is represented by the transformation matrix $\mathbf{T}_{i} \in SE(3)$. For each keyframe, we select as candidate points those with a locally high gradient magnitude, spread over the image. Each map point $\mathbf{p}$ is created in a keyframe (Sec. \ref{sec:frontend}) and its position is coded by its inverse depth $\rho = p_z^{-1}$. Thus, for each keyframe we store the raw image and the associated map points. We assume all images to be undistorted. We use the pinhole model to project a point from 3D space to the image plane, $\mathbf{u} = \pi(\mathbf{p}) = \mathbf{K} (p_x/p_z, p_y/p_z, 1)^T$, where $\mathbf{K}$ is the camera matrix. Its inverse is also defined when the inverse depth of the point is known: $\mathbf{p} = \pi^{-1}(\mathbf{u}, \rho) = \rho^{-1} \mathbf{K^{-1}} (u_x, u_y, 1)^T$.

The LMCW (Sec. \ref{sec:lmcw}) selects which keyframes are active and form the local window. Once a keyframe is active, all its parameters (pose and affine light model) and associated points (inverse depth) are optimized by the PBA. Otherwise, they remain fixed to keep the system consistent with respect to the unobservable degrees of freedom. During optimization, we use $\boldsymbol\xi \in SE(3)^n \times \mathbb{R}^{2n+m} $ to represent the set of optimized parameters ($n$ keyframes and $m$ points) and $\delta\boldsymbol\xi \in se(3)^n \times \mathbb{R}^{2n+m}$ to denote the increments.
Moreover, we use the left-compositional convention for all optimization increments, i.e. $\boldsymbol\xi^{(t+1)} = \delta\boldsymbol\xi^{(t)} \boxplus \boldsymbol\xi^{(t)}$. This direct VSLAM framework enables building a persistent map and reusing existing map information from old keyframes directly in the photometric bundle adjustment.

\subsection{Photometric Model} \label{sec:model} The same photometric function, the one proposed in \cite{Engel2016a}, is used in the whole system, i.e. geometry initialization (camera and point tracking), local windowed PBA and map reuse. For each point $\mathbf{p}$, we evaluate the sum of squared intensity differences over a small patch $\mathcal{N}_p$ around it, between the host $I_{i}$ and target $I_j$ images. We include an affine brightness transfer model to handle the camera's automatic gain control and changes in scene illumination. The observation of a point $\mathbf{p}$ in the keyframe $I_j$ is coded by: \begin{equation} E_p = \sum_{\mathbf{u}_k \in \mathcal{N}_p} w_k \bigg((I_{i}[\mathbf{u}_k] - b_{i}) - \frac{e^{a_i}}{e^{a_j}}(I_j[\mathbf{u'}_k] - b_j)\bigg)^2, \label{eq:model} \end{equation} \noindent where $\mathbf{u}_k$ is each of the pixels in the patch; $\mathbf{u'}_k$ is the projection of $\mathbf{u}_k$ into the target frame with its inverse depth $\rho_k$, given by $\mathbf{u'}_k = \pi(\mathbf{T}_{j,i}\cdot \pi^{-1}(\mathbf{u}_k, \rho_k))$ with $\mathbf{T}_{j,i}=\mathbf{T}_{j}^{-1}\mathbf{T}_{i}$; $a_{i}, b_{i}, a_{j}, b_{j}$ are the affine brightness parameters of each frame; and $w_k = w_{r_k} w_{g_k}$ is a combination of the robust influence function $w_{r_k}$ and a gradient-dependent weight $w_{g_k}$: \begin{equation} w_{g_k} = \frac{c^2}{c^2 + \parallel \nabla I\parallel_2^2}, \label{eq:weight} \end{equation} \noindent which works as a heuristic covariance in the Maximum Likelihood (ML) estimation, reducing the influence of high-gradient pixels due to noise. To sum up, the photometric cost function (\ref{eq:model}) depends on geometric ($\mathbf{T}_i, \mathbf{T}_j, \rho$) and photometric parameters ($a_i, b_i, a_j, b_j$).
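To make the model concrete, the following minimal Python sketch evaluates the residual of Eq. (\ref{eq:model}) for a single patch pixel. It is only an illustration of the formulas above, not the actual implementation (which is written in C++ on top of Ceres); the bilinear interpolation helper, the pose convention ($\mathbf{T}_{j,i}$ as a $4\times4$ matrix) and the value of the constant $c$ are our own assumptions.

\begin{verbatim}
import numpy as np

def bilinear(img, u):
    """Bilinearly interpolate image intensity at subpixel location u=(x, y)."""
    x0, y0 = int(np.floor(u[0])), int(np.floor(u[1]))
    a, b = u[0] - x0, u[1] - y0
    return ((1-a)*(1-b)*img[y0, x0]   + a*(1-b)*img[y0, x0+1] +
            (1-a)*b    *img[y0+1, x0] + a*b    *img[y0+1, x0+1])

def photometric_residual(I_i, I_j, u, inv_depth, T_ji, K, a_i, b_i, a_j, b_j):
    """Residual of one patch pixel u (Eq. 1), host frame i -> target frame j."""
    # Back-project u with its inverse depth into 3D (inverse pinhole model).
    p = (1.0 / inv_depth) * np.linalg.inv(K) @ np.array([u[0], u[1], 1.0])
    # Transform into the target frame and project back onto the image plane.
    p_j = T_ji[:3, :3] @ p + T_ji[:3, 3]
    u_j = (K @ (p_j / p_j[2]))[:2]
    # Affine-brightness-corrected intensity difference; e^{a_i}/e^{a_j}.
    return (I_i[int(u[1]), int(u[0])] - b_i) - \
           np.exp(a_i - a_j) * (bilinear(I_j, u_j) - b_j)

def gradient_weight(grad, c=50.0):
    """Gradient-dependent weight of Eq. (2); the constant c is assumed here."""
    return c**2 / (c**2 + grad[0]**2 + grad[1]**2)
\end{verbatim}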
\subsection{Photometric Bundle Adjustment (PBA)} \label{sec:PBA} Every time a new keyframe is created, all model parameters are optimized by minimizing the error from Eq. (\ref{eq:model}) over the LMCW of active keyframes $\mathcal{K}$. The total error is given by: \begin{equation} E = \sum_{I_i \in \mathcal{K}} \sum_{\mathbf{p} \in \mathcal{P}_i} \sum_{j \in obs(\mathbf{p})} \sum_{\mathbf{u}_k \in \mathcal{N}_p} w_k r^2_k(\boldsymbol\xi), \label{eq:fullModel} \end{equation} where $\mathcal{P}_i$ is the set of points in $I_i$ and $obs(\mathbf{p})$ the set of observations of $\mathbf{p}$. Note that the LMCW reuses map point observations for which the initial solution is not inside the convergence radius and which, thus, a single-level PBA is not able to correct. Hence, we propose to use a coarse-to-fine optimization scheme over all active keyframes. In each level, we iterate until convergence and use the estimated geometry as the initialization for the next level. The same points are used across all levels and each level is treated independently, i.e. neither the influence function nor outlier decisions are propagated across the levels (Sec. \ref{sec:robustPBA}). In this way, we are able to handle larger camera and point increments $\delta\boldsymbol\xi$ with the photometric model.

We minimize Eq. (\ref{eq:fullModel}) using the iteratively re-weighted Levenberg-Marquardt algorithm. From an initial estimate $\boldsymbol\xi^{(0)}$, each iteration $t$ computes weights $w_k$ and photometric errors $r_k$ to estimate an increment $\delta\boldsymbol\xi^{(t)}$ by solving for the minimum of a second-order approximation of Eq. (\ref{eq:fullModel}), with fixed weights: \begin{equation} \delta\boldsymbol\xi^{(t)} = -\mathbf{H}^{-1}\mathbf{b}, \end{equation} with $\mathbf{H} = \mathbf{J}^T\mathbf{W}\mathbf{J} + \lambda \textnormal{diag}(\mathbf{J}^T\mathbf{W}\mathbf{J})$ and $\mathbf{b} = \mathbf{J}^T\mathbf{W}\mathbf{r}$, where $\mathbf{W} \in \mathbb{R}^{m\times m}$ is a diagonal matrix with the weights $w_k$, $\mathbf{r}$ is the error vector and $\mathbf{J} \in \mathbb{R}^{m\times d}$ is the Jacobian of the error vector with respect to a left-composed increment, given by: \begin{equation} \mathbf{J}_k = \frac{\partial r_k (\delta\boldsymbol\xi \boxplus \boldsymbol\xi^{(t)})}{\partial \delta\boldsymbol\xi} \biggr\rvert_{\substack{\delta\boldsymbol\xi = 0}}. \end{equation} The PBA is implemented using the Ceres optimization library \cite{Agarwal} with analytic derivatives. Image gradients are computed using central pixel differences at integer values. For subpixel intensity and gradient evaluation, bilinear interpolation is applied. We take advantage of the so-called primary structure and use the Schur complement trick to solve the reduced problem \cite{Triggs2000}. The gauge freedoms are controlled by fixing all other keyframes that are covisible with the active ones.
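For illustration, a single damped and re-weighted update step can be sketched as follows. This dense-numpy version ignores the sparsity and the Schur complement exploited by the actual Ceres-based implementation; it only mirrors the equations above under those simplifying assumptions.

\begin{verbatim}
import numpy as np

def lm_step(J, r, w, lam):
    """One re-weighted Levenberg-Marquardt increment (Eq. 4).

    J   -- (m, d) Jacobian of the residual vector w.r.t. the increment
    r   -- (m,)   photometric residuals
    w   -- (m,)   per-residual weights (kept fixed during the step)
    lam -- damping factor lambda
    """
    JtW = J.T * w                   # J^T W without forming the m x m matrix W
    H = JtW @ J
    H += lam * np.diag(np.diag(H))  # Levenberg-Marquardt damping
    b = JtW @ r
    return -np.linalg.solve(H, b)   # delta_xi, applied by left-composition
\end{verbatim}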
\section{LMCW: Local Map Covisibility Window} \label{sec:lmcw} This section presents the LMCW and the strategy to select its active keyframes and active map points. It is a combination of temporal and covisibility criteria with respect to the latest keyframe being created. The LMCW is composed of two main parts: the temporal one and the covisible one. Fig. \ref{fig:LMCW_example} shows the LMCW selection strategy.

\begin{figure} \centering \includegraphics[width=0.42\textwidth]{./images/LMCW_example} \caption{LMCW example with $N_w=7$ and the latest keyframe being created (red). It is composed of $N_t=4$ temporal (blue) and $N_c=3$ covisible (orange) active keyframes.} \label{fig:LMCW_example} \end{figure}

The first part is composed of $N_t$ temporally connected keyframes that form a sliding window as in \cite{Engel2016a}. This part is critical during exploration because it initializes new points (Sec. \ref{sec:frontend}) and maintains the odometry accuracy. Whenever a new keyframe is created, we insert it into the temporal part and remove another one. Thus, the temporal part keeps a fixed size. The strategy that selects the keyframe to remove from the temporal part is summarized as: \begin{enumerate} \item Keep the last two keyframes ($I_1$ and $I_2$) to ensure the odometry accuracy during challenging exploratory motions, such as rotations. This avoids premature fixation of keyframe locations, guaranteeing that keyframes are well optimized beforehand. \item The remaining keyframes are evenly distributed in space. We drop the keyframe $I_i$ that maximizes: \begin{equation} s(I_i) = \sqrt{d(I_0, I_i)} \quad \textstyle{\sum}_{j=1}^{N_t} \left(d\left(I_i,I_j\right)\right)^{-1}, \end{equation} where $d(I_i,I_j)$ is the $L_2$ distance between keyframes $I_i$ and $I_j$. This strategy brings observations rendering high parallax into the PBA, which increases the accuracy. \end{enumerate}

The second part is composed of $N_c$ keyframes covisible with those in the temporal part. Additionally, we seek to fill the latest keyframe $I_0$ with reobserved map points, favoring map points imaged in depleted areas (image areas where no other map points are imaged). Our strategy to achieve this goal is summarized as follows (a minimal sketch is given at the end of this section): \begin{enumerate} \item Compute the distance map to identify the depleted areas. All the map points from the temporal part are projected into the latest keyframe; then the distance map registers, for every pixel, the $L_2$ distance to its closest map point projection. \item Select a keyframe, among the list of old keyframes, that maximizes the number of projected points in the depleted areas using the distance map. We discard points that form a viewing angle above a threshold, in order to detect and remove potentially occluded points as early as possible. \item Update the distance map to identify the new depleted areas. \item Iterate from (2) until $N_c$ covisible keyframes are selected or no more suitable keyframes are found. \end{enumerate} The covisible part incorporates already mapped areas in the LMCW before activating new map points. The proposed strategy avoids map point duplications, ensuring map consistency. The values of $N_t$ and $N_c$ are tuned experimentally in Sec. \ref{sec:results}.
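The greedy covisible selection can be sketched as below. The use of scipy's Euclidean distance transform, the concrete radius and viewing-angle thresholds, and the data layout of the candidate keyframes are our own illustrative assumptions, not the paper's implementation.

\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def select_covisible(img_shape, temporal_proj, candidates,
                     n_c=3, depleted_radius=20.0, min_view_cos=0.5):
    """Greedy selection of the covisible part of the LMCW via a distance map.

    img_shape     -- (h, w) of the latest keyframe
    temporal_proj -- (k, 2) integer pixel projections of temporal-part points
    candidates    -- dict: keyframe id -> ((m, 2) projections, (m,) view cos)
    """
    h, w = img_shape
    occupied = np.ones((h, w), dtype=bool)
    occupied[temporal_proj[:, 1], temporal_proj[:, 0]] = False
    selected, remaining = [], dict(candidates)
    for _ in range(n_c):
        # Distance of every pixel to its closest map point projection.
        dist = distance_transform_edt(occupied)
        best_id, best_proj, best_score = None, None, 0
        for kf_id, (proj, view_cos) in remaining.items():
            # Drop points seen under too large a viewing angle (occlusions).
            p = np.round(proj[view_cos > min_view_cos]).astype(int)
            p = p[(p[:, 0] >= 0) & (p[:, 0] < w) &
                  (p[:, 1] >= 0) & (p[:, 1] < h)]
            score = int(np.sum(dist[p[:, 1], p[:, 0]] > depleted_radius))
            if score > best_score:
                best_id, best_proj, best_score = kf_id, p, score
        if best_id is None:
            break                      # no more suitable keyframes
        selected.append(best_id)
        remaining.pop(best_id)
        # Update the distance map with the newly covered areas.
        occupied[best_proj[:, 1], best_proj[:, 0]] = False
    return selected
\end{verbatim}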
\section{Robust Non-linear PBA} \label{sec:robustPBA} The LMCW selects widely separated active keyframes according to geometric criteria, without any consideration of the actual photo-consistency between the images of the map points in the selected keyframes. Hence, it is possible that some of the points do not render photo-consistent images, because they suffer, for example, from occlusions or scene reflections. To make our PBA robust with respect to this lack of photo-consistency, we propose an outlier management strategy based on the photometric error distribution, from which we derive the appropriate weights for Eq. \ref{eq:fullModel}. According to the probabilistic approach, optimizing Eq. \ref{eq:fullModel} is equivalent to minimizing the negative log-likelihood of the model parameters $\boldsymbol\xi$ given independent and identically distributed errors $r_k$, \begin{equation} \label{eq:MAP} \boldsymbol{\xi^*} = \argmin_{\boldsymbol\xi} - \sum_{k=1}^{n} \log p(r_k \mid \boldsymbol\xi) \end{equation} The minimum of Eq. \ref{eq:MAP} is computed by equating its derivative to zero. This is equivalent to minimizing the re-weighted least-squares Eq. \ref{eq:fullModel} with the following weights: \begin{equation} \label{eq:Weights} w(r_k) = - \frac{\partial \log p(r_k)}{\partial r_k} \frac{1}{r_k} \end{equation} Therefore, the solution is directly affected by the photometric error distribution $p(r_k)$ (see \cite{Kerl2013} for further details). Next we consider different distributions; a sketch of the resulting weight functions is given at the end of this section.

\paragraph{Gaussian distribution} If errors are assumed to be normally distributed around zero, $\mathcal{N}(0,\sigma_n^2)$, the error model is $p(r_k) \propto \text{exp}(-r_k^2/(2\sigma_n^2))$. This model leads to constant weights, i.e. a standard least-squares minimization. Thus, it treats all points equally and outliers cannot be neutralized: \begin{equation} \label{eq:normal_w} w_n(r_k) = \frac{1}{\sigma_n^2} \end{equation}

\paragraph{Student's t-distribution} Recently, \cite{Kerl2013} analyzed the distribution of dense photometric errors for RGB-D odometry and showed that the t-distribution explains dense photometric errors better than a normal distribution, providing a suitable weight function: \begin{equation} \label{eq:t_dist_w} w_t(r_k) = \frac{\nu + 1}{\nu + (\frac{r_k}{\sigma_t})^2}, \quad \text{when} \; \mu=0 \end{equation} We have experimentally studied the sparse photometric errors and conclude that the t-distribution also explains the sparse model properly (Fig. \ref{fig:dist_example}). In contrast to the normal distribution, the t-distribution quickly drops the weights as errors move to the tails, assigning a lower weight to outliers. Besides, instead of fixing the degrees of freedom to $\nu=5$ as in \cite{Kerl2013}, we study the behavior of the model when $\nu$ is fitted together with the scale $\sigma_t$ (see Sec. \ref{sec:results}). To fit the t-distribution, we minimize the negative log-likelihood of the probability density function with respect to $\nu$ and $\sigma_t$ using the gradient-free iterative Nelder-Mead method \cite{LagariasJeffrey1998}. Besides, we filter out gross outliers before fitting the t-distribution. We approximate the scale value $\hat{\sigma}$ using the Median Absolute Deviation (MAD) as $\hat{\sigma}=1.4826 \text{ MAD}$ and reject errors with $|r_k| > 3 \hat{\sigma}$.

\begin{figure} \centering \includegraphics[width=0.4\textwidth]{./images/dist_example} \caption{Probabilistic error modeling. The top row shows the case where most of the map points are photo-consistent; then both the normal and the t-distribution models fit the photometric errors well. The bottom row shows a challenging situation where covisible reobservations introduce many outliers due to occlusions; the t-distribution fits the observed errors better than the normal one. On the left, the keyframe along with the point depth map after outlier removal.} \label{fig:dist_example} \end{figure}

\paragraph{M-estimators -- Huber} When the distribution of errors is hard to know, or when it is assumed to be normally distributed, using M-estimators is a popular solution. One of the most popular is the Huber estimator, since it does not completely remove high-error measurements but decreases their influence, which is crucial for reobservation processing. The Huber weighting function is defined as: \begin{equation} \label{eq:huber} w_h(r_k) = \begin{cases} \frac{1}{\sigma_n^2} & \text{if } |r_k| < \lambda \\ \frac{\lambda}{\sigma_n^2|r_k|} & \text{otherwise} \end{cases} \end{equation} where $\lambda$ is usually fixed or dynamically changed at each time step with the value $\lambda=1.345\sigma_n$ for $\mathcal{N}(0,\sigma_n^2)$. In this case, Huber gives linear influence to the outliers.
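The three weight functions, together with the MAD-filtered Nelder-Mead fit of the t-distribution, can be sketched in Python as follows. The log-likelihood expression is the standard Student's t density; optimizing in log-space for positivity and the initial value $\nu=5$ are our own choices for the sketch.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def w_gaussian(r, sigma):
    """Constant weights of the Gaussian model (Eq. 9): plain least squares."""
    return np.full_like(r, 1.0 / sigma**2)

def w_student_t(r, sigma, nu):
    """t-distribution weights (Eq. 10); heavy tails down-weight outliers."""
    return (nu + 1.0) / (nu + (r / sigma)**2)

def w_huber(r, sigma):
    """Huber weights (Eq. 11) with the usual threshold lambda = 1.345 sigma."""
    lam = 1.345 * sigma
    w = np.full_like(r, 1.0 / sigma**2)
    big = np.abs(r) > lam
    w[big] = lam / (sigma**2 * np.abs(r[big]))
    return w

def fit_student_t(r):
    """Fit (sigma_t, nu) by minimizing the negative log-likelihood with
    Nelder-Mead, after a MAD-based rejection of gross outliers."""
    sigma_mad = 1.4826 * np.median(np.abs(r - np.median(r)))
    r = r[np.abs(r) < 3.0 * sigma_mad]
    n = r.size

    def nll(theta):
        sigma, nu = np.exp(theta)   # optimize in log-space to stay positive
        return -(n * (gammaln((nu + 1) / 2) - gammaln(nu / 2)
                      - 0.5 * np.log(np.pi * nu) - np.log(sigma))
                 - (nu + 1) / 2 * np.sum(np.log1p((r / sigma)**2 / nu)))

    res = minimize(nll, x0=np.log([sigma_mad, 5.0]), method='Nelder-Mead')
    sigma, nu = np.exp(res.x)
    return sigma, nu
\end{verbatim}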
\subsection{Implementation of the probabilistic model into the PBA} We have studied the error distribution in each keyframe and conclude that there are differences between them. These variations might come from motion blur, occlusions or noise (see Fig. \ref{fig:dist_example} and the accompanying video). Hence, we fit the error distribution for each keyframe separately, using all the observations of active points in that keyframe. This allows adapting the PBA to different situations; e.g. certain error values might be considered outliers in a regular keyframe but inliers in a challenging one due to motion blur. Computing the error distribution and, thus, the weight distribution at every iteration changes the objective function (Eq. \ref{eq:MAP}), and the performance of the optimization might degrade. We propose to compute the error distribution only at the beginning of each pyramid level and to keep it fixed during all the optimization steps. At the end of the PBA, the error distribution is recomputed using the photometric errors obtained from the best geometry solution $\boldsymbol{\xi^*}$.

\subsection{Outlier management} \label{sec:outlier} It is crucial to detect and remove outlier observations as soon as possible to maintain the stability of the PBA. To achieve this, we exploit the information of each observation, which includes measurements from eight different pixels. We propose to build a mask for each point and mark each pixel measurement $r_k$ as inlier or outlier, as sketched at the end of this section. This helps to handle points at depth discontinuities, where other SLAM approaches typically struggle. To consider a pixel measurement an inlier, its photometric error has to be lower than the 95th percentile of the error distribution of the target keyframe. For challenging keyframes the threshold will be higher, being more permissive, whereas for regular ones it will be lower, being more restrictive. When the current local PBA is finished, we count the number of inlier pixels in the mask. Whenever an observation contains more than 30\% outlier pixels, the observation is marked as an outlier and removed from the list of observations of the point. Besides, during the optimization, if the fraction of outlier pixels is larger than 60\%, the observation is directly discarded from the current optimization step, i.e. $w(r)=0$.

We also detect and remove outlier points from the map. We propose to control the number of observations of each point to decide if it is retained. To be retained, a new point must be observed in all the new keyframes after its creation; once it has been observed in three keyframes, it is considered a mature point. Mature points are removed if their number of observations falls below three.
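A minimal sketch of the pixel-wise mask logic for one observation follows; the function names and return convention are our own, but the thresholds (95th percentile, 30\% and 60\%) are those stated above.

\begin{verbatim}
import numpy as np

def classify_observation(patch_errors, keyframe_errors,
                         inlier_percentile=95, drop_now=0.60, drop_after=0.30):
    """Pixel-wise outlier mask for one observation (Sec. Outlier management).

    patch_errors    -- (8,) photometric errors of the patch pixels
    keyframe_errors -- all active errors in the target keyframe, used to set
                       the per-keyframe threshold (95th percentile)
    Returns (weight_factor, keep): weight_factor = 0 discards the observation
    from the current step; keep = False removes it permanently after the PBA.
    """
    thresh = np.percentile(keyframe_errors, inlier_percentile)
    outlier_ratio = np.mean(np.abs(patch_errors) > thresh)
    weight_factor = 0.0 if outlier_ratio > drop_now else 1.0  # w(r) = 0 rule
    keep = outlier_ratio <= drop_after                        # post-PBA rule
    return weight_factor, keep
\end{verbatim}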
\section{Front-End} \label{sec:frontend} \paragraph{Frame Tracking} Each new frame is tracked against a local map, which is updated after every new keyframe decision. The local map is formed by the active points of the LMCW, referenced to the latest keyframe. The frame pose and its affine brightness transfer model are computed by minimizing Eq. \ref{eq:model}, in which the map points and the latest keyframe remain fixed. The initial estimate is provided by a velocity model. We use a coarse-to-fine optimization, as proposed for the PBA, to handle initial guesses with large errors. We use the same robust influence function of Sec. \ref{sec:robustPBA} to reduce the impact of high photometric errors. In addition, we use the inverse compositional approach \cite{Baker2004} to avoid re-evaluating the Jacobians at each iteration and reduce the computational cost.

\paragraph{New Keyframe Decision} Whenever we move towards unexplored areas, the map is expanded with a new keyframe. We use three different criteria with respect to the latest keyframe to decide if the tracked frame becomes a keyframe (a sketch of the decision follows the list): \begin{enumerate} \item The map point visibility ratio between the latest keyframe and the tracked frame, i.e. $s_u = N^{-1}\sum \textnormal{min}(p_z/p'_z, 1)$, where $N$ is the total number of visible points in the latest keyframe, $p_z$ the point depth in the latest keyframe and $p'_z$ the point depth in the tracked frame. The score is formulated to create more keyframes if the camera moves closer. \item The tracked frame parallax with respect to the latest keyframe, defined as the frame translation $\mathbf{t}$ scaled by the mean inverse depth of the tracking local map $\bar{\rho}$: $s_t = \parallel\mathbf{t}\bar{\rho}\parallel_2$. \item The illumination change, measured with the relative brightness transfer function between the tracked frame and the latest keyframe, i.e. $s_a = |a_{k} - a_i|$. \end{enumerate} A heuristic score based on the weighted combination of these criteria determines if the tracked frame is selected as a new keyframe: $w_u s_u + w_{t}s_{t} + w_{a}s_{a} > 1$.
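For clarity, the three scores and the decision rule can be written as follows; the concrete weight values $w_u, w_t, w_a$ are tuned constants not given in the text, so the defaults below are placeholders.

\begin{verbatim}
import numpy as np

def is_new_keyframe(p_z_latest, p_z_tracked, t_rel, mean_inv_depth,
                    a_latest, a_tracked, w_u=1.0, w_t=1.0, w_a=1.0):
    """Heuristic keyframe decision (Sec. Front-End).

    p_z_latest, p_z_tracked -- (N,) point depths in latest keyframe / frame
    t_rel                   -- (3,) relative translation of the tracked frame
    """
    s_u = np.mean(np.minimum(p_z_latest / p_z_tracked, 1.0))  # visibility
    s_t = np.linalg.norm(t_rel * mean_inv_depth)              # parallax
    s_a = abs(a_latest - a_tracked)                           # brightness
    # The weights w_* below are assumed placeholders, not the paper's values.
    return w_u * s_u + w_t * s_t + w_a * s_a > 1.0
\end{verbatim}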
\paragraph{New Map Point Tracking} During exploration, the system needs to create new map points. Each keyframe contains a list of candidate points that are initialized and activated if so decided. We initialize the inverse depth of these candidate points using consecutive newly tracked frames. To do so, we search along the epipolar line for the correspondence with minimum photometric error (Eq. \ref{eq:model}). Only distinctive points with low uncertainty will be activated and inserted into the PBA. Note that this delayed strategy requires several correspondences to obtain a good initialization, as we are working with small baselines that render low parallax. To guarantee that we have enough initialized candidates to activate, we maintain the candidate points of a keyframe until it is dropped from the temporal part of the LMCW. We only activate points that belong to image areas depleted of points (Sec. \ref{sec:lmcw}). Thus, when revisiting already mapped scene regions, only a few new points will be activated, as we reuse existing map points.

\section{Results} \label{sec:results} The proposed system is validated on the EuRoC MAV dataset \cite{Burri2015}. It has three scenarios, two rooms (V1, V2) and a machine hall (MH), with very challenging motions and changes in illumination. It also includes the 3D reconstruction ground-truth. We study the benefits of the VSLAM scheme of DSM by comparing it with a version, DSM-SW (sliding window), which uses only temporally connected keyframes as in \cite{Engel2016a}. We compare our approach against state-of-the-art algorithms such as ORB-SLAM \cite{Mur-Artal2015}, DSO \cite{Engel2016a} and LDSO \cite{Gao2018}. We evaluate the RMS Absolute Trajectory Error (ATE) and the Point to Surface Error (PSE). The ATE is computed using the keyframe trajectory of each sequence after Sim(3) alignment with the ground-truth. The PSE is estimated by measuring the distance of the reconstructed model to the ground-truth surface after the trajectory alignment. The results are shown using normalized cumulative error plots, which provide the percentage of runs/points with an error below a certain threshold. These plots provide information about both the accuracy and the robustness of the evaluated method. All experiments are executed on a standard PC with an Intel Core i7-7700K CPU and 32 GB of RAM.

\subsection{Parameter analysis and tuning} \label{sec:param_exp} This section presents an experimental analysis of the main parameters and options defining the DSM performance. To cover more cases, we run different experiments for the left and right cameras of the stereo rig, both in the forward and in the backward direction. We run each sequence 5 times, for a total of 220 experiments.

\subsubsection{Coarse-to-fine PBA} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{./images/pyramids} \caption{Number of pyramid levels $N_p$. RMS ATE (left) and processing times (right) compared with the RT (real-time) threshold for different $N_p$.} \label{fig:pyramids} \end{figure} We evaluate the effect of changing the number of pyramid levels $N_p$ during the PBA. Fig. \ref{fig:pyramids} shows the results for DSM-SW and DSM. Without the coarse-to-fine scheme, DSM-SW performs better than DSM: here, DSM is not able to benefit from point reobservations due to the accumulated drift. However, with a higher number of pyramid levels DSM is able to reuse map points and clearly achieves better accuracy. While the coarse-to-fine strategy certainly increases the accuracy of DSM, there is significantly less improvement for DSM-SW. This is the expected behavior, since DSM requires a larger convergence radius to process reobservations while DSM-SW does not. Note how DSM is able to process approximately 80\% of the runs with an RMS ATE below 0.1m, while DSM-SW only reaches 40\% of the runs. Moreover, we see that using $N_p=1$ with a sliding window already increases the performance. We also observe that increasing the number of levels beyond $N_p=2$ for DSM does not increase the accuracy but increases the runtime significantly. Including reobservations in the PBA has little effect on the processing time. In contrast, each pyramid level increases the runtime by approximately 50\%. Thus, we use $N_p = 2$ as the default, which achieves the best balance between efficiency and accuracy.

\subsubsection{Robust Influence Function} \begin{figure} \centering \includegraphics[width=0.38\textwidth]{./images/distribution} \caption{Robust influence function. Comparison of the RMS ATE between a Gaussian based M-estimator (Huber) and the t-distribution.} \label{fig:distribution} \end{figure} We study the effect of the selected weight distribution model. Fig. \ref{fig:distribution} shows the results for the t-distribution and Huber models. In contrast to \cite{Kerl2013}, we evaluate the influence of the model when the degrees of freedom $\nu$ are estimated together with the scale $\sigma$. For Huber, we study the case where the constant is fixed to $\lambda=9$ and the case where it is dynamically changed with the MAD value. Interestingly, there is no significant difference between using fixed or dynamic values for either distribution model. However, the t-distribution performs better in challenging situations, providing higher robustness than Huber. This comes from the fact that the t-distribution quickly drops the weights as errors move to the tails, while the Huber model does not. We use the complete t-distribution model as the default setting due to its flexibility in handling challenging situations.

\subsubsection{Number of covisible keyframes in the LMCW} \begin{figure} \centering \includegraphics[width=0.38\textwidth]{./images/LMCW} \caption{LMCW $N_w = N_t + N_c$. RMS ATE when changing the number of temporal $N_t$ and covisible $N_c$ keyframes.} \label{fig:LMCW} \end{figure} We observe that increasing the number of covisible keyframes $N_c$ increases the trajectory accuracy (Fig. \ref{fig:LMCW}). With those covisible keyframes, the PBA is able to handle point reobservations and to reduce the drift. However, the system requires temporally connected keyframes $N_t$ to guarantee the odometry robustness. Using too few temporal keyframes drastically reduces the accuracy. This is due to the fact that the temporal part ensures that new keyframes are well optimized and that enough new points are initialized during exploration.
Thus, we use the combination of $N_t=4$ and $N_c=3$ as the default setting, which achieves the best balance between precision and robustness.

\subsection{Quantitative results} \label{sec:quant_exp} This section presents a comparison of DSM against ORB-SLAM \cite{Mur-Artal2015}, DSO \cite{Engel2016a} and LDSO \cite{Gao2018}. We report the results published in \cite{Mur-Artal2016a} for ORB-SLAM and in \cite{Engel2016a} for DSO, and we use the open-source implementation for LDSO. All results are obtained using a sequential implementation without enforcing real-time operation, using $N_w=7$ active keyframes for all direct methods. We run all sequences with default settings, both forward and backward, 10 times each, using the left and right videos separately, for a total of 440 runs.

\subsubsection{Trajectory error} \begin{table}[t] \caption{RMS ATE [m] using forward videos for left (l) and right (r) sequences. ($\times$) means failure and (-) no available data.} \label{tab:ATE} \centering \begin{tabular}{@{}L{0.85cm}C{0.9cm}C{0.9cm}C{0.9cm}C{0.9cm}C{0.9cm}C{0.9cm}@{}}\hline\hline Seq. & ORB-SLAM \cite{Mur-Artal2015} & DSO \cite{Engel2016a} & LDSO \cite{Gao2018} & DSM-SW & DSM & DSM (Global PBA) \\\hline MH1\_l & 0.070 & 0.046 & 0.053 & 0.054 & \textbf{0.039} & 0.042 \\ MH2\_l & 0.066 & 0.046 & 0.062 & 0.041 & \textbf{0.036} & 0.035 \\ MH3\_l & 0.071 & 0.172 & 0.114 & 0.123 & \textbf{0.055} & 0.040 \\ MH4\_l & 0.081 & 3.810 & 0.152 & 0.179 & \textbf{0.057} & 0.055 \\ MH5\_l & \textbf{0.060} & 0.110 & 0.085 & 0.139 & 0.067 & 0.054 \\ V11\_l & \textbf{0.015} & 0.089 & 0.099 & 0.099 & 0.095 & 0.092 \\ V12\_l & \textbf{0.020} & 0.107 & 0.087 & 0.124 & 0.059 & 0.060 \\ V13\_l & $\times$ & 0.903 & 0.536 & 0.888 & \textbf{0.076} & 0.068 \\ V21\_l & \textbf{0.015} & 0.044 & 0.066 & 0.061 & 0.056 & 0.060 \\ V22\_l & \textbf{0.017} & 0.132 & 0.078 & 0.123 & 0.057 & 0.053 \\ V23\_l & $\times$ & 1.152 & $\times$ & 1.081 & \textbf{0.784} & 0.681\\ \hline MH1\_r & - & \textbf{0.037} & 0.050 & 0.054 & 0.045 & 0.039 \\ MH2\_r & - & 0.041 & 0.051 & 0.039 & \textbf{0.039} & 0.034 \\ MH3\_r & - & 0.159 & 0.095 & 0.187 & \textbf{0.048} & 0.035 \\ MH4\_r & - & 3.045 & 0.129 & 0.188 & \textbf{0.058} & 0.052 \\ MH5\_r & - & 0.092 & 0.087 & 0.131 & \textbf{0.064} & 0.052 \\ V11\_r & - & 0.047 & 0.662 & 0.031 & \textbf{0.014} & 0.012 \\ V12\_r & - & 0.080 & 0.208 & 0.118 & \textbf{0.046} & 0.043 \\ V13\_r & - & 1.270 & 0.642 & 1.313 & \textbf{0.045} & 0.037 \\ V21\_r & - & \textbf{0.027} & 0.040 & 0.032 & 0.034 & 0.030 \\ V22\_r & - & 0.059 & 0.068 & 0.314 & \textbf{0.057} & 0.052 \\ V23\_r & - & 0.540 & \textbf{0.171} & 0.889 & 0.528 & 0.482 \\ \hline\hline \end{tabular} \end{table}

Table \ref{tab:ATE} reports the median errors for each forward sequence. Overall, we see that DSM-SW performs similarly to DSO. This is expected, since both methods are based on the same sliding-window approach without a multiscale PBA. However, DSM-SW successfully executes all MH sequences, while DSO fails in MH\_03\_medium. This is probably due to the use of a more robust influence function in DSM-SW. DSM achieves higher accuracy in almost all sequences compared to the rest of the direct approaches, DSO, LDSO and DSM-SW; DSO and LDSO achieve slightly higher accuracy only in a few sequences. ORB-SLAM obtains better results in V1 and V2, but DSM achieves the best performance in the MH sequences. Note that, in contrast to ORB-SLAM, we do not incorporate any place recognition, pose-graph or relocalization modules.
This shows that the high precision of DSM is due to point reobservations and proves that DSM can achieve, with only 7 keyframes, results comparable to ORB-SLAM, which uses tens of cameras in the local BA. In the sequence V1\_03\_difficult, DSM achieves an RMS ATE of only 7.6cm, which is by far the best performance among all the approaches tested. This sequence contains very rapid motions and illumination changes, which demonstrates the robustness of the proposed method. Besides, we successfully manage to complete all sequences and obtain an RMS ATE below 0.1m for all of them, except V2\_03\_difficult, where all of the compared approaches fail. In addition, we have also evaluated the improvement due to a final global PBA at the end of each sequence. We have observed that the global PBA converges in a few iterations and only slightly improves the RMS ATE, but with a significant increase in the computational cost. For instance, in the sequence V2\_02\_medium the global PBA optimizes fifty times more parameters with a processing time two orders of magnitude higher. The accuracy of the proposed direct local mapping scheme is very close to the result of a global PBA, but at a small fraction of the computational cost.

\subsubsection{Mapping vs Pose-Graph} \begin{figure} \centering \includegraphics[width=0.37\textwidth]{./images/colormap} \caption{Full evaluation results. For each sequence (X-axis) we plot the RMS ATE [m] in each iteration (Y-axis), with a total of 440 runs.} \label{fig:colormap} \end{figure} \begin{figure} \centering \includegraphics[width=0.37\textwidth]{./images/ATE_accumulated} \caption{RMS ATE for LDSO and DSM.} \label{fig:ate_accum} \end{figure} Comparing LDSO and DSM shows the differences between using a VO scheme with a pose-graph and using a VSLAM scheme. Fig. \ref{fig:colormap} shows the RMS ATE for all the evaluated sequences for LDSO and DSM. Overall, we observe that DSM achieves better accuracy. We also see that reusing existing map points allows completing a higher percentage of sequences successfully. We build a persistent map and reuse map points to support the odometry estimation, instead of permanently marginalizing all points that leave the local window. This can also be observed in Fig. \ref{fig:ate_accum}. While DSM is able to process 80\% of the sequences with an RMS ATE below 0.1m, LDSO can only handle 50\% of the runs under this limit. Moreover, we have observed that in some sequences LDSO misses many available loop closures due to a lack of feature matches. This makes the odometry drift until a larger correction loop is detected, causing temporally inconsistent trajectory and structure estimates. Fig. \ref{fig:MapVSGraph} shows the evolution of the RMS ATE along the trajectory. The effect of missing loop closures with a feature-based pose-graph strategy can be clearly seen. In contrast, building a persistent map enables reusing existing map information continuously, which keeps the trajectory accuracy stable in time. Although the final RMS ATE is similar in both systems, the odometry using a VSLAM approach is more accurate and, thus, more reliable. This clearly shows that a VSLAM scheme provides better accuracy than a VO scheme with a pose-graph. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{./images/map_vs_graph} \caption{VSLAM vs VO + Pose-Graph. RMS ATE after processing each keyframe in the trajectory. It shows the time evolution of the error.
While a feature-based pose-graph strategy may miss many loop closures, a VSLAM scheme continuously reuses existing information to provide more accurate and reliable estimates in time.} \label{fig:MapVSGraph} \end{figure}

\subsubsection{Map error} Fig. \ref{fig:StructError} shows the distance between the reconstructed points and the ground-truth surface. We compare all the sequences against LDSO, except V2\_03\_difficult, where LDSO fails. Clearly, incorporating map point reobservations into the PBA increases not only the trajectory accuracy but also the reconstruction precision. Although the final trajectory RMS ATE is similar in some sequences, such as V1\_01\_easy, the map is without a doubt more accurate in DSM. Besides, we have observed that LDSO creates ten times more points than DSM for these sequences, due to the fact that DSM reuses existing map points, avoiding duplications. \begin{figure} \centering \includegraphics[width=0.43\textwidth]{./images/struct_error} \caption{Map error. For each scene we show the accumulated PSE distribution using all the reconstructed 3D points for all runs. Solid lines (---) present easy sequences, dashed lines (-{}-{}-) medium and dotted lines ($\cdot\cdot\cdot$) difficult ones for each scene.} \label{fig:StructError} \end{figure}

\subsubsection{Processing time} \begin{table}[t] \caption{Processing time and keyframe frequency.} \label{tab:Time} \centering \begin{tabular}{c|rrr}\hline\hline Operation & Median [ms] & Mean [ms] & St.D. [ms] \\\hline Frame \& Point Tracking & 7.44 & 7.45 & 0.31 \\ Local PBA & 888.77 & 908.53 & 121.10 \\\hline Keyframe Period & 396.28 & 397.22 & 177.51 \\ \hline\hline \end{tabular} \end{table} Table \ref{tab:Time} reports the processing time required by each part of the method, as well as the measured keyframe period. In our current initial implementation, the PBA is the bottleneck of the processing cost. We observe that it would need to be twice as fast to meet the required keyframe creation rate. It is possible to improve the runtime significantly using SIMD instructions to process each patch. Besides, many of the operations can be parallelized, as they are independent for each point. We believe these upgrades could make DSM run in real-time applications, since the mapping thread is not required to run at frame rate but at keyframe rate.

\subsection{Qualitative results} \label{sec:qual_exp} Fig. \ref{fig:example_V21} and Fig. \ref{fig:qual_example} show some 3D maps obtained with DSM. In contrast to sliding-window based approaches, incorporating covisibility constraints avoids duplicating points and builds a consistent map. DSM estimates a precise camera trajectory and 3D reconstruction even in the most difficult sequences, such as V1\_03\_difficult and MH\_05\_difficult (see the accompanying video). \begin{figure*} \centering \includegraphics[width=0.78\textwidth]{./images/qual_example} \caption{Qualitative examples. V1\_03\_difficult (left) and MH\_05\_difficult (right) sequences. The trajectory is displayed in red.} \label{fig:qual_example} \end{figure*}

\section{Discussion \& Future work} We have demonstrated the benefits of building a persistent map instead of just estimating the camera odometry with a temporary map. Both the accuracy of the trajectory and that of the reconstructed map improve by reusing map information in the photometric model.
DSM manages to process scene reobservations and successfully completes 10 out of 11 sequences with an RMS ATE below 0.1m in the challenging EuRoC dataset, without requiring any loop closure detection and correction. During long-term operation in the same environment, DSM provides reliable estimates as long as point reobservations are successfully processed. It would be interesting to add map maintenance strategies, such as the removal of redundant keyframes and points, to ensure long-term efficiency and to allow performing a feasible global bundle adjustment as in \cite{Mur-Artal2015}. Besides, we have shown that the t-distribution fits the sparse photometric errors well, yielding a more robust PBA. However, it would be interesting to evaluate it against other alternatives, such as the Cauchy M-estimator.

Even with a persistent map, it is not possible to handle all reobservations in all situations. In large trajectory scenarios, the accumulated drift makes it impossible to detect map point reobservations with geometric techniques alone. Sometimes map point reobservations do not even fall within the camera field of view due to the large drift, e.g. in a highway loop. In these cases, a place recognition module, which exploits the image appearance, would be useful to detect loop closures. Then, a pose-graph optimization would serve as an initialization for the PBA. Therefore, we believe that combining map reuse capabilities with a place recognition module, as previously done with indirect techniques in \cite{Strasdat2011,Mur-Artal2015}, is the best alternative. In any case, we think that a pose-graph should only be used as a coarse initialization technique for the PBA, which is the optimization technique that actually exploits all the available geometric information in a VSLAM system.

\section{Conclusion} In this work, we have presented a novel fully direct VSLAM method which is capable of building a persistent map by reusing map points from already visited scene regions. To this end, we have presented a new local window selection strategy based on covisibility criteria, which enables including map point reobservations in the PBA. We have demonstrated that a coarse-to-fine strategy is required to process point reobservations with the photometric model. In addition, we have incorporated a robust influence function based on the t-distribution, which increases the robustness of the whole system against spurious observations. As a result, we use the same objective function and map points for all the operations in the system. We demonstrate on the EuRoC MAV dataset that the proposed method reduces both the estimated trajectory and map errors, while avoiding inconsistent map point duplications. \ifCLASSOPTIONcaptionsoff \newpage \fi \section*{Acknowledgments} We would like to express our gratitude to Prof. J.D. Tard\'os for the fruitful discussions and sensible advice. \bibliographystyle{IEEEtran}
{ "timestamp": "2020-06-02T02:09:20", "yymm": "1904", "arxiv_id": "1904.06577", "language": "en", "url": "https://arxiv.org/abs/1904.06577" }
\section{Introduction} In a nutshell, open science refers to the movement of making any research artefact available to the public. This ranges from the disclosure of software source code (``open source''), through the actual data itself (``open data'') and the material used to analyse the data (such as analysis scripts, ``open material''), to the manuscripts reporting on the study results (``open access'').\footnote{Open science and open scholarship encompass a wide range of topics and activities, many of which are described by Tennant et al.~\cite{Tennant2019}. In this chapter, we concentrate on topics we believe to be in scope of (empirical) software engineering, namely open access, open data, open materials, open source, open peer review, and registered reports.} Disclosing research artefacts increases transparency and, thus, the reproducibility and replicability of our scientific process and our results. Open science is often seen as an important means to move forward as a scientific research community.

Open data and open source -- both being major principles under the common banner of open science -- constitute a major hallmark in making empirical studies transparent and understandable to researchers not involved in carrying out those studies. This can be done, for example, by sharing replication packages that capture the raw data and anything necessary for their analysis and interpretation. That way, we increase the reproducibility of our research. This, in turn, strengthens the credibility of the conclusions we draw from the analysed data, and it allows others to build their own work upon ours; hence, it more generally strengthens the overall body of knowledge of the research community.

Besides these more ideological views on open science and reasonable arguments in favour of engaging in it as a research community, on which any reader will probably agree, there is much more to it which we need to understand when considering open science in the context of software engineering research. There are, for example, various challenges in data disclosure -- technical, ethical, legal, but also social ones -- which differ from the standards and views of other disciplines and which make it difficult for open science to become the norm in our own field. Consider, for example, the notions of repeatability, replicability, and reproducibility in the terminology introduced by the ACM\footnote{\url{https://www.acm.org/publications/policies/artifact-review-badging}} (verbatim): \begin{itemize} \item \textbf{Repeatability (Same team, same experimental setup):} The measurement can be obtained with stated precision by the same team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same location on multiple trials. For computational experiments, this means that a researcher can reliably repeat her own computation. \item \textbf{Replicability (Different team, same experimental setup):} The measurement can be obtained with stated precision by a different team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same or a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using the author's own artefacts.
\item \textbf{Reproducibility (Different team, different experimental setup):} The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artefacts which they develop completely independently. \end{itemize}

As an engineering discipline heavily inspired by the natural sciences, we often make the implicit assumption that our focus is on quantitative and even purely computational studies (e.g. simulations). For these, the existing definitions and norms hold as they are, and we are able to achieve replicability and reproducibility. This situation is, however, not the norm. Most studies in software engineering involve -- in one form or another -- humans. In the end, software is made by human beings for human beings. Human subjects, however, act purely rationally in exceptional cases only, if at all~\cite{Lambert06}. This means that every change in an experimental context, even if strictly following the same experimental setup and procedure, will eventually yield different (context-dependent) results. Such studies would then not fit the available definition of reproducibility as used in computational studies, but it is still reasonable to argue that they would be reproducible. Further challenges in software engineering research are that much of our data emerges from sensitive (e.g. industrial) settings, and the reliance upon qualitative data, for which the analysis is less procedural than for quantitative data (which also imposes significant integrity challenges). All this often renders full disclosure difficult, and we frequently need to anonymise the data to act within legal and ethical constraints that most computational studies do not have. These two facets of software engineering research alone already show that we need to adapt open science principles to the particularities of our discipline, just as is the case in other disciplines.

How can our software engineering community of researchers adopt its own open science movement? We believe that it is a lack of proper understanding about \begin{itemize} \item what open science is (and what it isn't) for software engineering, \item why we should all do our best to implement it, whether as editor, chair, or researcher, and finally \item how we could and should do it \end{itemize} that often leads to a general reluctance towards implementing open science. Sometimes, it even leads to a general dismissal of the potential open science has for individual researchers and the community as a whole. All this renders our own open science movement cumbersome. In this chapter, we cover the essentials of open science for software engineering. In particular, we establish a common ground in our discipline by elaborating on established key terms, principles, and approaches in Sect.~\ref{sec:what} -- all tailored to the particularities of our discipline. We further discuss why we should engage in open science (Sect.~\ref{sec:why}) before presenting practical guidelines for implementing open science in Sect.~\ref{sec:how}. In Sect.~\ref{sec:challenges}, we end with a discussion of selected challenges and pitfalls. The latter is based on our shared experiences emerging from open science activities and the lessons we have learnt so far as authors and as organisers implementing first open science initiatives in the empirical software engineering community.
The main target audience consists of software engineering scholars interested in the general notion of open science and those interested in implementing open science in their own research practices. One hope we associate with this chapter is not only to respond to those critical voices still sceptical towards open science, but also to strengthen the voices of those supporting it out of the firm conviction that open science should soon become the norm in software engineering research, too. \section{What is Open Science?} \label{sec:what} Open science is a movement whose aim is to render all artefacts borne out of scientific research activities accessible, without any barriers, to any individual on Earth~\cite{Woelfle2011}. Open science also refers to the scientific part of the broader term of open scholarship, i.e. ``the process, communication, and re-use of research as practised in any scholarly research discipline, and its inclusion and role within wider society''~\cite{Tennant2019}. Open science itself is an umbrella term that encompasses several facets of openness, for example open access, open data, open source, open government, open notebooks, or open standards~\cite{FOSTER2019}. In the following, we discuss those concepts particularly relevant to the (empirical) software engineering research community. \subsection{Open Access\label{ssec:whatis:open-access}} Open access is associated with publications, i.e. research articles, technical reports, and papers in general. Open access occurs whenever a publication is freely available on the public Internet without any access barrier -- financial, legal, or technical (this includes not forcing users to register with systems). It allows individuals to read, download, copy, distribute, print, search, or link to the full texts of publications for any lawful purpose~\cite{BOAI2002}. Minor constraints over redistribution and reuse of the publication may still apply and usually take the form of attribution. It is typical for open access publications that the authors retain the copyright of their work, and the act of rendering the work open access is enabled through proper licences. The \emph{Creative Commons} licence model is the most widely employed one for open access (see also Sect.~\ref{ssec:whatis:open-data}). Open access can take several forms according to which version of a publication is made public and at which point of the academic writing process it is made public. If authors make a self-produced copy of their work openly available, they perform an act of \emph{self-archiving}. The work is called a \emph{preprint} if it reflects a version of their manuscript that has not yet been accepted for publication at a scientific venue. If the content of the self-produced work is identical to the content of the accepted publication, it is called a \emph{postprint}. The only differences between the \emph{postprint} and the manuscript formally published by a traditional publisher like ACM, IEEE, or Springer are the typesetting and the location of the document. Pre- and postprints are typically hosted in open repositories, in contrast to the digital libraries of the publishers. One such example is given in the following, while we go into more detail in Sect.~\ref{sec:how}.
\begin{question}{Self-archiving via arXiv} arXiv, pronounced as \emph{archive} and available at \url{https://arXiv.org}, is a repository, born in 1991, of freely accessible preprints and postprints, as well as whitepapers, covering several scientific fields including physics, mathematics, and computer science~\cite{ginsparg2011twenty}. arXiv is free to access, to register to, and to submit to, but it has two safeguards for publishing. First, authors have to be endorsed by existing members before they are allowed to register in the system. Second, every submission is moderated by volunteers who check for issues such as scope or copyright. arXiv is the de-facto standard repository for mathematics and physics, with some authors publishing their work only there. It receives more than 10,000 submissions per month and is, at the time of writing this chapter, hosting approximately 1.5M manuscripts in a distributed archival system of multiple digital libraries all over the world. \end{question} The act of self-archiving is also known as \emph{green open access} and is allowed by the majority of academic publishers, with some regulations. \begin{question}{Self-archiving options and publishers' regulations} Different publishers define different regulations with respect to the needs and possibilities of self-archiving, and it is imperative to strictly adhere to these rules. The SHERPA partnership, a partnership of several universities with the original goal of setting up an institutional open access repository, offers with \emph{RoMEO} -- \url{http://www.sherpa.ac.uk/romeo} -- a tool summarising publishers' copyright and archiving policies. RoMEO distinguishes different categories via the following colour codes, commonly adopted also in the wider sense: \begin{itemize} \item \textbf{White:} Self-archiving not formally allowed \item \textbf{Yellow:} Authors can archive preprints (i.e. pre-refereeing) \item \textbf{Blue:} Authors can archive postprints (i.e. final draft post-refereeing) or publisher's version/PDF \item \textbf{Green:} Authors can archive preprint and postprint or publisher's version \end{itemize} \end{question} Whenever a publisher renders an accepted publication as openly licensed and available without any restriction whatsoever, the artefact becomes open access under the \emph{gold open access} model. This model often follows an author-pays strategy, but there also exist publishers that ask for no article processing charges at all. We refer the reader to the work of Graziotin et al.~\cite{Graziotin2014} for more information on open access and its publishing models. \subsection{Open Data\label{ssec:whatis:open-data}} Open data is very similar to open access, but it applies to any data that was produced in the course of research activities, such as the raw data obtained via a controlled experiment. Openness of data can come in various forms and to different degrees; for instance, while an abstract description of a data set (metadata) could be found and accessed online, it could still be the case that access to the full data set is only granted upon request and only for specific research purposes carefully selected and laid out by the owners of that data set. Here, we point to the FAIR principles\footnote{See also ~\url{https://www.force11.org/group/fairgroup/fairprinciples}} which describe how data should ideally be made open: when data sets are Findable, Accessible, Interoperable, and Reusable, we refer to them as ``FAIR data''.
In general, open (FAIR) data follows the idea that research data should be freely available to everyone to use and redistribute as they wish, without any restriction whatsoever born out of copyright and licences~\cite{Auer2007}. As with open access, the Creative Commons deeds are commonly employed licences for open data. \begin{question}{Creative Commons (CC) copyright licences} Creative Commons copyright licences (see \url{https://creativecommons.org/licenses/}) constitute a public licence model with the aim to facilitate granting copyright permissions to published work. The two most employed Creative Commons deeds are the Public Domain dedication (CC0, ``No rights reserved'') and the Attribution 4.0 (CC BY 4.0) licence. The former implements true public domain, effectively acting as a waiver of any copyright on the artefacts. The latter is an open licence that allows reuse and redistribution of the artefact with the only condition of attributing the original work to the authors. \end{question} Besides the frequently used CC licence models introduced above, further ones are possible, too. One example is the Attribution-NonCommercial 4.0 (CC BY-NC 4.0) licence, which adds the clause that the original artefact and any derivation of it cannot be used for commercial purposes. While the Public Domain and the CC BY-NC licences might seem more suitable for academic work, opting for them can be problematic, as we explain in Sect.~\ref{ssec:challenges:right-license}. \subsection{Open Source} Open source in open science is nothing different from open source software as it is commonly known by the computer science community. In fact, many argue that the open source software movement served as an inspiration for more openness in various fields beyond software-related ones (see also the work by Boisseau et al.~\cite{boisseau_omhover_bouchard_2018} providing an elaborate discussion). In any case, several research endeavours in computer science and empirical software engineering, but also in other disciplines, produce software. One such example is what is often referred to as \emph{research software} (or scientific software), i.e. software products developed with the purpose of analysing (empirical) data, such as Python code. In principle, the software developed can be released as open source software using known licences such as the MIT licence or the GPLv3. \subsection{Preregistration of Studies} Preregistration is a useful tool to ensure a certain level of quality of a study design, e.g. by making sure that the hypotheses of a confirmatory study were actually pre-defined rather than defined after having analysed the data to fit the results. Researchers define what their research questions are, why they want to pursue the research, and how exactly they will try to answer their questions. The Open Science Framework is currently one of the most common places to preregister research projects (see \url{https://osf.io/prereg/}). Some journals have already reported how preregistration avoids \begin{itemize} \item publication bias \cite{Dickersin1990}, \item p-hacking \cite{Head2015extent}, and \item HARKing (hypothesising after the results are known \cite{Kerr1998harking}). \end{itemize} These journals offer the possibility of submitting a \emph{registered report}.\footnote{For a guide on writing registered reports, we refer the reader to \url{https://osf.io/8mpji/}} Such a report goes through peer review and, upon acceptance, receives an \emph{in-principle acceptance} (IPA).
If the researchers conduct the study as indicated in the registered report, their paper will be published in the journal regardless of the results. \subsection{Open Science Badges} For every form of open science, publishers can award \emph{open science badges}. Badging is a form of promoting the open science activities of researchers via a specific badge that publicly recognises their open science engagement. To this end, publishers associate a specific symbol (i.e. a badge) with chosen artefacts to certify that the content is available and accessible in a persistent location. There exist various forms of badges obeying the particularities of the various available badge systems. Some of them are publisher-specific (such as the ACM badge system\footnote{See \url{https://www.acm.org/publications/policies/artifact-review-badging}}) and some of them are independent, such as the OSF Open Science Badges. \begin{question}{OSF Open Science Badges} A widespread open science badge system is the one introduced via the Open Science Framework (OSF, \url{https://osf.io/}) and further promoted by the Center for Open Science (\url{https://cos.io}). This model distinguishes between badges in the following categories: \begin{itemize} \item \textbf{Open Data:} This badge is awarded when the shareable data necessary to reproduce a study are made publicly (digitally) available. \item \textbf{Open Materials:} This badge is awarded when making available the materials of the followed research methodology necessary to reproduce or replicate that methodology (e.g. analysis scripts). \item \textbf{Preregistered:} This badge is awarded when preregistering a study design including the description of the research design and study materials. \end{itemize} \end{question} Which badges are awarded, and how, depends on many (often non-trivial) criteria defined by editors, following a specific reviewing model to check the eligibility for the badges. Although badges are, at the time of writing this chapter, still rather rare in software engineering research (such as badges for preregistered studies), and although some systems may still be perceived as difficult to implement (such as the ACM system, due to the wide spectrum of often overlapping badges), badges are generally recognised to be a valuable incentive that increases participation in open science initiatives~\cite{rowhani2017incentives}. Hence, they are being adopted more and more by journals and conferences. \subsection{Open Peer Review\label{ssec:whatis:open-peer-review}} Different models of peer review exist and have been experimented with lately~\cite{Tennant2017}. One of these is open peer review, for which there is, however, no commonly accepted and clear definition yet, nor an agreed schema, as elaborated in a secondary study by Ross-Hellauer~\cite{RossHellauer2017}. Open peer review implementations intend to make the review process as transparent as possible and can feature factors ranging from removing the anonymity of authors and reviewers alike, over making the actual reviews public and allowing for interaction between authors and reviewers, to crowdsourcing reviews and even making manuscripts public before the review phase. One least common denominator of open peer review is disclosing the names of authors and reviewers so that both can see each other's identities. This allows authors and reviewers to have a direct conversation rather than having to go through third parties for communication purposes (e.g. via handling editors or chairs).
In the programming community, this type of review process has long been known in code reviews, but -- despite the advantages recognised in the research community, as shown in a recent study on the future of peer review in software engineering~\cite{prechelt2018community} -- it has not yet been adopted by our journals and conferences (see also Sect.~\ref{sec:challenges}). One exception is the Journal of Open Source Software.\footnote{For details, see \url{https://joss.readthedocs.io/en/latest/submitting.html#the-review-process}} Another definition focuses on disclosing the reviews -- sometimes with the names of the reviewers. That way, reviewers can be held more accountable, the reviews can serve to make the acceptance decision more transparent to others, and the reviewers can claim the recognition they deserve. There are many fears and hopes around open peer review models, many of which are discussed in an editorial of the European Journal of Neuroscience after having implemented such a model \cite{doi:10.1111/ejn.13762}. One fear (for which, however, there is no evidence yet) is the risk that early career researchers might be more reluctant to provide profound critique if their names are revealed (see also our discussion in Sect.~\ref{sec:SharingPreprints}). A partial implementation of this model, where reviewer names and their reviews are made public, is followed by the PeerJ Computer Science journal, which asks the reviewers whether they wish to disclose their names, and subsequently asks the authors whether they wish to disclose the peer review history in the published paper. \section{Why do we need Open Science?} \label{sec:why} Open science is more and more accepted in scientific communities as having many positive effects. These effects range from increased access and citation counts~\cite{eysenbach2006citation} to facilitating technology transfer with industry and fostering collaborations through open repositories. Academic publishing and knowledge sharing are meant to become more cost-effective -- German university libraries alone are estimated to be spending well beyond 200 million EUR on publication subscription fees per year~\cite{schimmer2015disrupting} -- and researchers and practitioners with no publisher subscriptions can freely access and build on the work of others. There are many discussions and controversies centred around publisher subscription models and how institutions (and institutional alliances) should deal with them. In this chapter, we will not even try to address these discussions to the extent they deserve, but rather provide a broader view on why we need open science in general. Imagine the following situation: a conference author submits a manuscript promising to have provided scientific and empirically informed arguments for considering Go To statements harmful; a statement that previously relied only on rationalist arguments of software engineering pioneers like Dijkstra~\cite{Dijkstra68}. As laid out by that author, those arguments emerge from the exploration of industrial source code -- which the author does not share, maybe because of non-disclosure agreements with the collaborating companies from which the data emerges, or maybe for other reasons; this is not made explicit in the manuscript. The author has further analysed the impact of those statements based on in-depth interviews -- which they also do not share, maybe because of ethical and legal constraints.
Imagine further that the reviewers find no obvious methodological flaws in the design, which the author describes in great detail for both the content analysis and the interviews. The author is an experienced and recognised authority in the research community, and the manuscript is written in an easy-to-follow manner. The reviewers further find the manuscript ``compelling'' and ``interesting'', and the results are also ``surprising'' to them, given the contrary evidence provided by other authors who previously analysed publicly available software repositories and came to very different conclusions~\cite{NRY+15}. Even if the submitting author did not discuss that other publication in detail, a presentation of that work would certainly lead to controversial and interesting discussions; something the reviewers believe to merit presentation at the prestigious conference they review for. So they recommend acceptance, and the PC chairs select that publication for inclusion in the program. It is reasonable to believe that many readers of this chapter who have served as co-chairs and reviewers for conferences can identify with such a situation. Now imagine you were a young scholar analysing the effects of software defects and you find this publication. You would certainly find it interesting, as it could provide a useful ground for follow-up work. Ask yourself -- honestly -- the following questions: \begin{itemize} \item Would you trust the results? If so, based on what? The simple fact that it has been accepted by the prestigious conference? The way the manuscript is generally written? The name of the author or his or her affiliation? Maybe based on the high number of citations that this publication already has? Maybe a combination of all these factors? Would the picture change if the author were unknown to you and the work had been published at a lower-ranked conference? \item Would you be able to really comprehend how the study has been carried out? Would you be able to reproduce the conclusions drawn by the author based on the insights provided in the manuscript? Would you be able to replicate the study in your own research environment? \item To what extent does that piece of work provide a good theory for your work? Would this theory be robust and reliable (i.e. scientific)? Would you consider it useful? \item How would you use the work if you could only access the abstract of the manuscript because it is hidden behind a paywall and your institution has no subscription? Would you cite the work based on the information in the abstract? Maybe based on the statements found in other papers citing that work? \item How would you cite that work and put it in relation to your own research? Would the picture change depending on whether the statements in that manuscript support your own arguments or contradict them? \end{itemize} This example certainly describes a fictitious situation, and yet it describes in many ways the de-facto situation of software engineering research. Scientific practices rely -- and need to rely -- on certain safeguards, such as peer review, but they are nevertheless also shaped by social and political mechanisms and many non-trivial, subjective factors in the research communities. These factors very often dictate, in one form or the other, which submissions eventually make it into the publication landscape and which do not, and which publications are cited and which are not.
As a consequence, publication and citation regimes -- although inherently rooted in scepticism -- also have much to do with trust and convictions~\cite{MendezPassoth18}; something which holds for most, if not all, scientific disciplines. Transparency is therefore key to breaking with scientific theories being grounded in common sense, taken-for-granted knowledge, hopes, convictions, and provisional beliefs. Software engineering still faces many challenges other scientific disciplines do not face. Our data comprises qualitative and quantitative data types, and the theories we work on often have various disciplinary backgrounds (from mathematics through psychology to sociology). Further, our data very often emerges from highly sensitive environments, making disclosure difficult and in many cases impossible. Even if we can disclose the data, in many cases it has to be anonymised to an extent that it becomes difficult to fully comprehend. All this renders building and evaluating empirically grounded theories in our field difficult. Hence, scientific practices often remain rooted in trust rather than in transparent scientific processes. Yet, as laid out by Mendez and Passoth~\cite{MendezPassoth18}, it is theory building which constitutes a crucial foundation on our avenue towards turning our engineering discipline into a more scientific, evidence-based one, just as was the case for many other disciplines before. Transparency, credibility, and reproducibility are cornerstones in building and evaluating robust and reliable theories for our still emerging field, and open science provides a solid foundation to achieve that goal. In essence, open science practices in general and data sharing in particular eventually allow us as a community of software engineering researchers and practitioners to effectively make contributions to our body of knowledge based upon shared data sets -- making our empirical studies transparent, comprehensible, and credible -- and thus to move forward as a community. As we argue, scientific publishing is not only essential in knowledge sharing and dissemination~\cite{houghton2010economic}, but also an essential facet in accumulating knowledge via a variety of studies tackling the same or similar questions and building upon the same or similar settings and data sets -- e.g. as part of replication studies~\cite{GJV10} -- which are rendered difficult, if not impossible, without clear open science principles dictating shared values and principled scientific practices. Therefore, there is no doubt anymore \emph{whether} open science will become the norm also in software engineering research. Ever more public and private funding bodies are implementing open access and open data policies~\cite{childs2014opening, van2011managing}. The research community is also in tune with this movement, as we can observe: editors and conference organisers are already planning for a smooth transition to open data, and reviewers are becoming more and more sceptical towards manuscript submissions which do not disclose their data and which, consequently, ask the reviewers to extend too much credit. Often, however, it remains a question of \emph{how} the community should adopt open science practices and how individual researchers should open up their research. We discuss this question in more detail in the next section. \section{How do we do Open Science?} \label{sec:how} In the following, we address the question of how to engage in open science.
There are many aspects to consider when engaging as a researcher in open science. We believe that these aspects are best introduced along a simple (again, fictitious) scenario, introduced next. The goal is to demonstrate opportunities along an exemplary set of practices and techniques available to engage in open science in a hands-on manner. \subsection{Exemplary Scenario} As an exemplary scenario, we consider a research project in which we are researchers at European universities collaborating with project partners from other universities in the United States. Those partners are researchers in psychology. Our project aims at conducting a psychometric software engineering study, and our overall goal is to collect data in a large-scale study with human subjects. The research design is done in a joint effort. While our partners are largely responsible for the study execution and the data collection, we are largely responsible for analysing the data and reporting on it. To keep the example simple, we focus on the statistical analysis of quantitative data in our study, but we also refer the reader to the challenges emerging from the disclosure of qualitative data in Sect.~\ref{sec:challenges}. \subsection{Overall Data Analysis Process} Figure~\ref{fig:project} depicts, on the left side, the steps followed in our data analysis, with a particular focus on those aspects relevant from an open science perspective. Overall, we first prepare our data and check for any errors, inconsistencies, and missing values, and we discuss these with our partners. At the same time, we start thinking about how to best answer the questions at hand. While we design our analysis procedure, we update the data structure to best fit the analysis plan. Once the analysis plan is finalised, we make it openly available. Ideally, we submit it as a \emph{preregistered study}. This submission includes our study protocol and the material (analysis scripts) as well as a detailed sample description allowing reviewers to judge the potential of the study with respect to its theoretical and practical impact. Only after registering our study and considering the feedback received do we begin with the data analysis. After discovering no clear patterns in the data, we decide to participate in a workshop where we present our ongoing work based on a previously published short paper describing the overall goal of the study and preliminary results. This work-in-progress presentation serves the purpose of receiving further feedback from the research community and of getting useful ideas on how to improve our data visualisation techniques. After successfully finishing our data analysis, we finally write up our main publication on the project and disclose our manuscript as a preprint prior to submitting it for review to a journal. \begin{figure}[htb!] \centering \input{project_tikz.tex} \caption{Schema of an exemplary simple project.} \label{fig:project} \end{figure} In the following, we walk through that process while focusing on the infrastructure and tools. Our hope is that presenting the process in such a pragmatic, hands-on manner allows readers to fully reproduce it as it would typically appear in a research setting. \subsection{Exemplary Walk-through} There are various tools that can be used to make our project open and reproducible. While we do not claim to present an exhaustive list here, our aim is to give some examples which we use ourselves and to make recommendations based on our own experiences.
One basic issue to consider first is the folder structure and the naming convention. A good folder structure, in our view, could be like the one in Listing~\ref{lst:prj-structure}, as it captures the very essence of our process: \begin{minipage}{\linewidth} \begin{lstlisting}[style=tree,caption={Project structure and naming convention for open science},label={lst:prj-structure},language=Bash]
myproject/
├── README.md
├── Makefile
├── data/
│   ├── clean_data.Rmd
│   ├── clean_data.docx
│   ├── data_clean/
│   │   ├── mydata.csv
│   │   └── mymetadata.json
│   └── data_raw/
│       ├── messy_data1.xlsx
│       └── messy_data2.csv
├── analysis_plan
│   ├── analysis_plan.Rnw
│   └── analysis_plan.pdf
├── analysis/
│   ├── analysis.R
│   └── functions/
│       └── myfunction.R
├── conference_slides.Rmd
├── conference_slides.html
├── man_references.bib
├── manuscript.Rnw
└── manuscript.pdf
\end{lstlisting} \end{minipage} Note that the folder structure clearly reflects the different steps shown in Figure~\ref{fig:project}, and the folder and file names clearly indicate what each of them contains. Regardless of the actual size of the project, the basic rule should be to apply that structure and naming convention concisely and consistently. We have also experienced it to be important to keep the original data in a separate folder (\lstinline{data_raw/} in Listing~\ref{lst:prj-structure}) and to not manipulate the raw data files, but to create new data files in a separate folder for the data cleaning and analysis (\lstinline{data_clean/} in Listing~\ref{lst:prj-structure}). In combination with a script which cleans the data (\lstinline{clean_data.Rmd} in Listing~\ref{lst:prj-structure}), this makes the data cleaning process reproducible for others. To keep the working environment stable in terms of software versions, we decide to use a \emph{virtual machine} for this project. An alternative option could also be a container (Docker, Singularity, etc.). For the data cleaning and the analysis, we decide to use \emph{R}~\cite{rstats}, an open source software environment for statistical computing. An alternative would be to use Python. R scripts (e.g.~\lstinline{analysis.R} in Listing~\ref{lst:prj-structure}) are text files that can be executed in the R console. In contrast to point-and-click programs (e.g.~SPSS when used without syntax) or programs producing binary files (e.g.~Excel), R, like Python, allows for a reproducible workflow which can easily be version controlled. For version control in our project, we decide to use \emph{Git}~\cite{chacon2014pro} in combination with the Git-repository hosting service \emph{GitLab} (\url{https://gitlab.com}). That version control system allows us and our collaborating partners to trace the versions of all produced text documents in an organised fashion. In combination with the hosting service GitLab, these versions remain available online to everyone involved in our project. For automating our workflow, we use Make~\cite{stallman2001gnu}. To this end (still referring to Listing~\ref{lst:prj-structure}), we store a \lstinline{Makefile} in our main project folder, which contains the information on how different files depend on each other; for example, \lstinline{data/clean_data.Rmd} depends on \lstinline{data/data_raw/messy_data1.xlsx} and \lstinline{data/data_raw/messy_data2.csv} and produces \lstinline{data/data_clean/mydata.csv}, \lstinline{data/data_clean/mymetadata.json}, and \lstinline{data/clean_data.docx}.
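To make this concrete, the following is a minimal sketch of such a \lstinline{Makefile}. The file names match Listing~\ref{lst:prj-structure}; the recipes (the \lstinline{Rscript} calls) and the intermediate file \lstinline{analysis/results.rds} are our illustrative assumptions, not prescribed commands. \begin{lstlisting}[caption={Minimal sketch of a \texttt{Makefile} encoding the dependencies described above (recipes are illustrative assumptions)},label={lst:makefile-sketch},language=make]
# Recipe lines must be indented with a tab character.
all: manuscript.pdf

# Cleaning: clean_data.Rmd turns the raw, messy data into the
# cleaned data set, its metadata, and a Word report for the partners.
data/data_clean/mydata.csv data/data_clean/mymetadata.json data/clean_data.docx: \
    data/clean_data.Rmd data/data_raw/messy_data1.xlsx data/data_raw/messy_data2.csv
	Rscript -e "rmarkdown::render('data/clean_data.Rmd')"

# Analysis: re-runs only when the cleaned data or the code changed.
# (analysis/results.rds is a hypothetical intermediate output.)
analysis/results.rds: analysis/analysis.R data/data_clean/mydata.csv
	Rscript analysis/analysis.R

# Manuscript: knitr weaves the .Rnw source into a PDF.
manuscript.pdf: manuscript.Rnw analysis/results.rds man_references.bib
	Rscript -e "knitr::knit2pdf('manuscript.Rnw')"
\end{lstlisting} Running \lstinline{make} then rebuilds exactly those artefacts whose inputs have changed, so the whole pipeline is documented and repeatable with a single command.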
Our \lstinline{Makefile} thereby also documents how the outputs can be produced (via bash commands). Next to using R for our project, we use \emph{R Markdown}~\cite{xie2018rmarkdown} and \emph{knitr}~\cite{xie2015dynamic}. Both allow users to combine R code chunks with explanatory text snippets and thus allow for literate programming~\cite{knuth1984literate}. Our text is formatted with Markdown (R Markdown) and LaTeX (knitr). As our partners rely on MS Word, we regularly convert our R Markdown documents to Word documents for feedback via comments directly in those documents. This simplifies the communication about the ongoing data checking and cleaning process. For an intermediate project report, and later for the manuscript writing, we use knitr, as it gives us more formatting options. Our analysis plan is written with knitr, and we upload the PDF to the Open Science Framework (OSF,~\url{https://osf.io}). This allows us to use the analysis plan for the preregistration of the work we aim to do. Preregistration helps to reduce biases in the process of the data analysis (see also \url{https://osf.io/prereg}). We create the slides for the conference again using R Markdown, which can produce high-quality HTML slides. The manuscript is written using knitr, and we make it available as open access on the preprint server arXiv (\url{https://arXiv.org}). To check whether preprint sharing is within the legal constraints set by the publisher of the conference, we use the search engine SHERPA RoMEO (\url{http://sherpa.mimas.ac.uk/romeo}). As we see that the publisher follows a yellow open access model, allowing us to disclose preprints but not postprints, we choose to upload our preprint only. After uploading the preprint, we directly submit our manuscript to a peer-reviewed journal. Upon acceptance of the manuscript by that journal, we update our preprint with the DOI provided by the publisher, but we do not submit the postprint, i.e. the post-production version of the manuscript, in order to comply with the copyright agreement. This preprint version is also the one we distribute among the community, e.g. via social media. Since all root documents are text files (except for \lstinline{data/data_raw/messy_data1.xlsx}), we can further put them under version control with Git. Through GitLab, we can make them easily accessible to others. This way, our project folder \lstinline{myproject/} can be seen as a replication package. Prior to disclosure, however, we check for parts in our data that need anonymisation to comply with the European General Data Protection Regulation (GDPR) as well as with the approval notification of the Institutional Review Board of our partners in the U.S. We remove any data that might allow tracing observations back to individuals participating in the study. For our work to be reproducible in the long term, we further need to document the versions of the software used. The virtual machine does that for us, but it is not very portable. The option we follow is to use the package version management system \emph{packrat} for R~\cite{ushey2018packrat}. We notice that our partners are very reluctant to share the data because of its sensitivity and because they fear misuse (e.g. when taken out of context); thus, we would not be able to follow the FAIR principles (Sect.~\ref{ssec:whatis:open-data}) as anticipated. It is, however, possible for us to convince our project partners to disclose the data when implementing some safeguards.
To this end, we decide to disclose our data using the service platform Zenodo (\url{https://zenodo.org}) while choosing \emph{Restricted Access}. Other researchers interested in accessing the data can first read the extensive metadata describing the content of the data and how it was produced. If they believe that the data would fit their scope of interest, they can apply for access, and our previously established \emph{Data Use and Access Committee (DUAC)}, formed by us data owners and a member of the responsible ethics committee, decides whether to grant access to the data or not. This example, we hope, illustrates an open-science-conform study analysis and reporting process that produces all artefacts relevant to an open science format adoptable in software engineering, including the disclosure of: \begin{enumerate} \item A study protocol submitted and reviewed prior to publication (preregistered study) \item The replication package including all analysed data (open data) and all files, scripts, and codebooks necessary to comprehend the study (open materials) \item A preprint (yellow open access) \end{enumerate} Needless to say, the example is a simplified one, neglecting some challenges we typically encounter in practice. In the following, we discuss those challenges in more detail. \section{Challenges, Pitfalls, and Guidelines} \label{sec:challenges} In the following, we discuss typical challenges and pitfalls in open science from the perspective of researchers engaging in it. To this end, we draw from our experiences covering both the roles of researchers and the roles of organisers (handling editors and conference and workshop organisers). \subsection{General Issues} The major challenge that keeps researchers from following all the open science practices described above is probably the difficulty and effort required to make everything openly available. All these practices constitute steps that researchers have to take in addition to the non-open research process. They might be motivated to take these additional steps to support the scientific process and the higher visibility of open publications. Yet, this motivation has limits. Therefore, the ease of applying open science practices is essential. In our experience, the difficulty of being open has reduced dramatically over the years. It is easy and cost-free to handle a research project on GitHub or OSF, to permanently publish data on Zenodo or figshare, and to provide preprints on services like arXiv. Some difficulty lies in the details, such as the LaTeX requirements of arXiv, but nowadays we mostly work with modern web applications that behave as one would expect. Another challenge that might keep researchers from employing openness in their research is the area of conflict between anonymity and confidentiality on the one side and openness on the other. In open science, we ideally would like to make everything open that helps others to understand, verify, and build on our work. When we work with companies, however, they have an understandable interest in protecting their intellectual property and reputation, often reflected in signed non-disclosure agreements. Therefore, we have to reduce the data that we can make open or anonymise the data that we have. This is, again, additional effort and a risk that we accidentally make something open that should be confidential. Similarly, when our studies involve humans, they have an interest in protecting their private data.
With the EU GDPR, we now also have a strong legal basis for this protection. Hence, again, we run the risk of violating corresponding laws. In both cases, companies and individual humans, it is therefore imperative to publish any potentially sensitive data only with the explicit consent of the study participants. Only they themselves can decide what is sensitive and critical for them. In principle, this holds for any kind of publication and, hence, only needs to be extended to asking for consent for publishing the data as well. Anonymising company names is often enough. For anonymising sensitive data of study participants, there are also established techniques (see, e.g., \cite{doi:10.1177/1468794114550439}). The challenge of anonymity also plays into the third, more general issue we would like to mention: often, openness is merely an afterthought. After we have done all the work, we provide a preprint and make the data available. Ideally, however, the whole process should be open, for example by using OSF or GitHub for all the documents, data, and analysis scripts. In terms of anonymity, this is difficult, as we cannot make everything open and often need a shadow repository with the original raw data. The raw data then needs to be carefully filtered before being stored in an open repository. Yet, keeping everything open has the advantage that there is no way of manipulating the research during the analysis and publication phases. We cannot make the hypothesis fit the data in hindsight, because we documented the hypothesis before we did the analysis. \subsection{Sharing Preprints} \label{sec:SharingPreprints} For preprints, we need to consider where we want to publish the paper later on. Upon acceptance of our manuscript, we can also post a postprint. This is rarely a problem when we already have a preprint that is simply updated. Otherwise, there might be publisher-specific embargo periods that need to be adhered to. \begin{question}{Self-archiving options for Software Engineering} In principle, different publishers have different criteria about what they allow at all and what licences to choose. One helpful overview of the different self-archiving options in tune with the regulations of the major publishers in software engineering is, we believe, provided by Arie van Deursen~\cite{VanDeursen2016}. \end{question} One challenge we would like to highlight in the context of preprint sharing emerges from the trend in software engineering to push for double-blind reviewing models, i.e. anonymising not only the reviewers' identities but also those of the authors. While the higher goal of reducing potential biases is laudable, it complicates open science practices considerably. Conferences are increasingly adopting a double-blind model of peer review, which does not easily allow preprints to be made available, because a preprint might allow the reviewers to find out who the authors are. It has been our effort to start a trend in conferences to allow the self-archiving of preprints and to instruct peer reviewers not to actively look for the papers under review online, but it nevertheless remains a challenge. The picture would change if open peer review were implemented in a code-review style (as discussed in Sect.~\ref{ssec:whatis:open-peer-review}).
However, the downside and fear of many researchers is that open peer review will put a lot of pressure on researchers, especially early career researchers: both as authors -- the reviewers will know who made potential mistakes -- and as reviewers -- the authors will know who proposed the changes or even who recommended rejection of the paper. \subsection{Choosing an Appropriate Licence \label{ssec:challenges:right-license}} A common pitfall when starting to use open science practices is to assign unsuitable licences. arXiv, for example, allows authors to select an ad-hoc non-exclusive licence (to arXiv). Granting this minimal licence is compatible with any relevant venue a researcher might want to submit to. Hence, it keeps all options open even if the paper is rejected at the initially planned venue. Adding a Creative Commons licence could reduce this flexibility considerably. In fact, arXiv itself allows authors to choose from various Creative Commons licences (CC BY, CC BY-SA, CC BY-NC-SA) as well as the CC0 dedication (i.e. public domain)~\cite{arXivLicense2019}. Many argue that CC0 is preferable because it frees people from dealing with attribution. However, in the scientific context, attributing the source and authors of all artefacts that are used is good practice independent of the licence used. PeerJ PrePrints, for instance, enforces the CC BY licence exclusively~\cite{PeerJLicense2019}. This licence is also recommendable for postprints, provided postprint sharing is compatible with the publisher's copyright agreement, as it ensures that the researchers are given credit while giving others the largest amount of freedom to share and reuse the manuscript. In principle, choosing the proper licence is a non-trivial but important task, because certain licences for preprints might cause incompatibility issues further down the publishing chain. Certain licences, including some Creative Commons ones, prevent the work from being used in commercial settings (the -NC part of the CC) or require the redistribution of derivative works under the same licence (the -SA part of the CC). Traditional publishers are, most of the time, commercial entities that require either a full copyright transfer or exclusive rights to distribute the work in a more restricted way, i.e. selling access to papers through paywalls. Non-commercial and share-alike CC licences are, thus, in most cases incompatible with traditional publishing models. Even the more liberal CC BY licence, which only requires attribution and does not enforce a share-alike clause, might pose issues with traditional publishing, as it is non-revocable and allows commercial use by anyone (i.e. it is non-exclusive to the publisher). The CC0 dedication has also caused issues with traditional publishing in the past~\cite{Russel2011}. The default licence of arXiv is a non-exclusive licence to distribute \cite{ArxivLicenseDistribute2019}, which virtually only allows arXiv to distribute and display a document (meaning that, theoretically, we are not allowed to do anything at all with arXiv submissions but read them). This licence is perhaps the most restrictive one among the free licences, making it compatible with traditional publishing (if the copyright transfer conditions allow for it, see Sect.~\ref{ssec:whatis:open-access}). We can provide two recommendations. arXiv's default non-exclusive licence to distribute should be used when the paper is intended to be published with a traditional publisher.
A CC BY licence should be used when the paper is intended to be published with a gold open access journal. We do not recommend licensing any preprint, postprint, or data set using a non-commercial clause (-NC). While counter-intuitive at first sight (we wish for our work to stay free, after all), a non-commercial clause prevents the work from being used by commercial entities. The term \emph{commercial} is, from a legal perspective, much broader than it might appear at first; it might affect a large spectrum of people and entities, including a simple blog if the website uses an advertisement system. There exist open companies that were born from commercial entities and that are therefore not non-profit (e.g. figshare and PeerJ), and these would not be allowed to make any use of material licensed with the -NC clause. Such use might include data mining of papers and data sets and aggregating results, which can still be very useful for the advancement of knowledge. For more information on these legal aspects, we direct the reader to the work of a joint group of copyright experts and Wikimedia~\cite{Wikimedia2012}. \subsection{Sharing Data and Materials} A common pitfall in publishing open data and open materials, e.g. as part of replication packages, is to use a personal or institutional website to make them available quickly and easily. It gives one a unique ID in the form of a URL. Yet, a challenge is that we cannot ensure that the URL stays valid and that the content stays on the website. As has been empirically demonstrated, web pages disappear continuously~\cite{Koehler2002,Koehler2003}. Therefore, repositories such as Zenodo or figshare, which provide a DOI and ensure permanent archival, are much preferable. There are small differences between the repositories, but both are recommendable. figshare is commercial but free to use, and its usability seems more polished than Zenodo's. Furthermore, figshare participates in data preservation mechanisms while Zenodo does not. The permanency of Zenodo is ensured because it is financed by the European Union and run by CERN. Similarly to preprint sharing in the context of double-blind reviewing models, the availability of open data and materials would also reveal the authors' identities and is, hence, rendered complicated. While there is no easy solution to the problem of sharing preprints when following a double-blind reviewing model, open data repositories now allow researchers to publish data anonymously for review, thus complying with the restrictions imposed by such reviewing models. The authors of the data can then be made public after the paper is accepted. A set of instructions on how to share and archive open data and keep it compatible with double-blind review is presented by Graziotin~\cite{Graziotin2019}. \subsection{Preparing Qualitative Data} Achieving replicability and reproducibility of qualitative studies is particularly challenging, and many might argue that it is not possible at all (see also the introductory discussion). This, however, renders the disclosure of qualitative data no less important than the disclosure of quantitative data. Even if we cannot support reproducibility of qualitative studies in the narrower sense (if interpreting those terms literally), we can at least achieve transparency of the research and support researchers not involved in the study in understanding how the researchers carrying out the study have drawn their conclusions.
Qualitative data is usually the most difficult to prepare for disclosure in a replication package, because it is the most personal and the most difficult to anonymise within legal and ethical constraints. A number is more abstract (and easier to open up) than words spoken (and transcribed) by individuals, e.g. during an interview. Ideally, we anonymise qualitative data as well\footnote{By anonymisation of qualitative data we refer to the removal of any information that allows revealing the individuals' identities and/or other sensitive information not directly related to the study.} and publish it with the explicit consent of the participants. It is important to be open about this upfront to understand whether the participants will agree. Especially for qualitative data, it might often not be the case that we get the consent. Then, it is even more important that at least the analysis material is shared. This is typically easier to share and may include a study protocol as well as the coding schema and coding rules used when coding qualitative data (e.g. as part of a Grounded Theory study). That way, reviewers and other researchers can at least check the trustworthiness of the analysis process and understand how the authors have drawn their conclusions. \section{Conclusion} \label{sec:conclusion} Open science describes the movement to render all artefacts born out of scientific research activities accessible. Openness in our research processes is important to move forward in building reliable and robust theories, thus turning our discipline into a more scientific one. As outlined in this chapter, however, we still face various challenges other disciplines do not. Despite those challenges of adapting open science to the software engineering context, we can see that our research community is making great progress in that direction. We have ourselves either accompanied or fully implemented efforts to help the community open up its research artefacts. In the course of our endeavour, we have noticed that introducing open science into a research community is a difficult and sensitive task, because open science is still often confronted with prejudice, but also because many authors, despite their willingness to conform to such policies, often do not know how exactly to follow such an initiative; that is to say, it is often difficult to see what we should do and what we can do (also considering ethical and legal constraints). This is also the reason why we, as organisers, are often constrained by a general reluctance to implement mandatory open science principles (e.g. via open data policies), thus rendering the transition to more openness in our discipline arduous. However, the implementations of open science policies in recent editions of conferences and journals -- even non-mandatory ones, where authors could participate on a voluntary basis with the support of dedicated open science chairs -- nevertheless showed high participation ratios, with more than 50\% of the authors disclosing their data. Such support by the community, and the positive feedback, e.g. in town hall meetings, strengthen our confidence that the research community is showing more and more awareness of the importance of open science and that open science will eventually become the norm.
One hope we associate with our ongoing efforts in implementing open science initiatives in software engineering venues is to send strong signals into the research community and to gradually increase the awareness of participating researchers so that we move further in that direction. Arguably, we are still confronted with various challenges, such as: \begin{itemize} \item How to implement a uniform and transparent guideline for reviewing disclosed artefacts, covering all possible variations in the different types of study (e.g. quantitative and qualitative ones)? \item How to implement preregistered studies (which we consider especially important to tackle the problems of publication bias or p-hacking) in tune with the reviewing processes of our existing journals and conferences, and how to re-define existing roles and responsibilities? \item How to properly reward authors with a clear and easy-to-understand (and easy-to-use) badge system which recognises the differences in the various study types and the difficulties in opening up sensitive, e.g. industrial, data? \item How to implement open peer review? We can nowadays observe a significant turn away from the existing single-blind reviewing regime, which we applaud, but instead of opening up reviews as well, the current trend is towards even more closedness via double-blind reviewing models, thus rendering other open science activities difficult, too. \end{itemize} We remain convinced that it is no longer a question of whether open science will become the norm for the software engineering research community, but we recognise that there is still a long way to go, also because we still need to increase the awareness of what open science is, why it is so important, and how to properly adopt its principles in software engineering. The chapter at hand is intended to address these questions and to contribute to the movement. Our hope is to further encourage all members of our research community to join us in this important endeavour of actively shaping an open science agenda for the software engineering community. \begin{acknowledgement} We want to thank all members of the empirical software engineering research community actively supporting the open science movement and its adoption in the software engineering community. Just to name a few: Robert Feldt and Tom Zimmermann, editors in chief of the Empirical Software Engineering journal, are committed to supporting the implementation of a new Reproducibility \& Open Science initiative\footnote{See also \url{https://github.com/emsejournal/openscience}} -- the first one to implement an open data initiative following a holistic process including a badge system. The steering committee of the International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE) supported the implementation of an open science initiative from 2016 on. Markku Oivo, general chair of the International Symposium on Empirical Software Engineering and Measurement (ESEM) 2018, has actively supported the adoption of the CHASE open science initiative, with a focus on data sharing, for the major empirical software engineering conference, so that we could pave the road for a long-term change in that community.
Sebastian Uchitel, general chair of the International Conference on Software Engineering (ICSE) 2017, further supported an initiative to foster the sharing of preprints, and Natalia Juristo, general chair of ICSE 2021, actively supports the adoption of the broader ESEM open science initiative at our major general software engineering conference. Finally, we want to thank Per Runeson, Klaas-Jan Stol, and Breno de Fran\c{c}a for their elaborate comments on earlier versions of this manuscript. \end{acknowledgement} \bibliographystyle{plain}
{ "timestamp": "2019-08-14T02:14:58", "yymm": "1904", "arxiv_id": "1904.06499", "language": "en", "url": "https://arxiv.org/abs/1904.06499" }
\section{Introduction} \label{sec:intro} In many image processing applications, using local information combined with knowledge of the long-range spatial arrangement is crucial. The spatial redundancy of sub-images, called patches, encodes the small-scale structure of the image as well as its large-scale organization. More precisely, local information is encoded in the patch content, and the large-scale organization is contained in the redundancy of this information across the patches of the image. For example, patch-based inpainting techniques, such as \cite{criminisi2004region, he2014image}, assign patches of a known region to patches of an unknown region. Namely, each patch position on the border of the unknown region is associated to an offset corresponding to the best patch according to the partially available information. In \cite{he2014image} the authors replace the search over the whole image by a search among the most redundant offsets in the known region. This allows the authors of \cite{he2014image} to retrieve long-range spatial structure in the unknown part of the image. Another famous application of spatial redundancy can be found in denoising, with the seminal work (Non-Local means) of Buades and coauthors \cite{buades2005non}, in which the authors propose to replace a noisy patch by the mean over all spatially redundant patches. Last but not least, spatial redundancy is of crucial importance in exemplar-based texture synthesis. In this paper we define textures as images containing repeated patterns but also reflecting randomness in the arrangement of these patterns. Among textures, one important class is given by the microtextures, in which no individual object can be clearly delimited. In the periodic case, a more precise definition will be given in Definition \ref{def:microtexture}. These microtexture models can be described by Gaussian random fields \cite{van1991spot, galerne2011random, leclaire2015random, xia2014synthesizing}. Parametric models using features such as wavelet transform coefficients \cite{portilla2000parametric}, scattering transform coefficients \cite{sifre2013rotation} or convolutional neural network outputs \cite{gatys2015texture} have been proposed in order to derive image models with more structure. On the other hand, non-parametric patch-based algorithms such as \cite{efros1999texture,efros2001image,kwatra2003graphcut, raad2015conditional, galerne2018texture} propose to use the most similar patches in order to fill in the new texture images, similarly to inpainting techniques. All these techniques lift images into spaces with dimensions higher than the original image space, and make use of the redundancy of the lifting to extract important structural information. There exist two main types of lifting: feature extraction and patch extraction. Feature extraction relies on the use of filters, linear or non-linear, which aim at selecting substantial local information. Among popular kernels are oriented and multiscale filters, which have been identified as early processing in mammalian vision systems \cite{daugman1985uncertainty, hubel1959receptive}. These last years have seen the rise of neural networks, in which the filter dictionary is no longer given as an input but learned through a data-driven optimization procedure \cite{simonyan2014vgg}. On the other hand, patch-based methods rely on the assumption that image processing tasks are simplified when conducted in the higher-dimensional patch space.
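As a concrete instance of such a patch-space computation, we may recall the NL-means estimator of \cite{buades2005non}, written here in a simplified discrete form (we drop the Gaussian weighting of the patch norm used in the original paper, and adapt the notation to the patch notation made precise in Section~\ref{sec:similarity functions}; $h > 0$ is a smoothing parameter controlling the decay of the weights):
\begin{equation*}
\mathrm{NL}(v)(\veclet{x}) = \frac{\sum_{\veclet{y} \in \Omega} w(\veclet{x},\veclet{y}) \, v(\veclet{y})}{\sum_{\veclet{y} \in \Omega} w(\veclet{x},\veclet{y})} \;, \qquad w(\veclet{x},\veclet{y}) = \exp \left( - \frac{\norm{P_{\veclet{x}+\omega}(v) - P_{\veclet{y}+\omega}(v)}_2^2}{h^2} \right) \;,
\end{equation*}
where $v$ is the noisy image and $P_{\veclet{x}+\omega}(v)$ is the patch of $v$ at position $\veclet{x}+\omega$. Each noisy pixel is thus replaced by a weighted mean over all pixels whose surrounding patches are similar, which is meaningful precisely when the image exhibits such spatial redundancy.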
Every analysis performed in a lifted space, built via feature extraction or patch extraction, relies on the comparison of points in this space. In patch-based lifted spaces, we aim at finding dissimilarity functions such that two patches are visually close if the dissimilarity measurement between them is small. In this paper we focus on the squared Euclidean distance, but other choices could be considered \cite{wang2003multiscale,wang2004image,debortoli2018gaussian,deledalle2012compare}. This leads us to consider a statistical hypothesis testing framework to assess similarity (or dissimilarity) between patches. The null hypothesis is defined as the absence of local structural similarities in the image. Reciprocally, the alternative hypothesis is defined as the presence of such similarities. There exists a wide variety of tractable models exhibiting no similarity at long range, like Gaussian random fields \cite{van1991spot, galerne2011random, leclaire2015random, xia2014synthesizing} or spatial Markov random fields \cite{cross1983markov}, whereas sampling and inference in highly structured models rely on optimization procedures and may be computationally expensive, their distributions being the limit of some Markov chain \cite{zhu1998filters,lu2015learning} or of some stochastic optimization procedure \cite{bruna2018multiscale}. This encourages us to consider an \textit{a~contrario} \ approach, \textit{i.e.} \ we do not consider the alternative hypothesis and focus on rejecting the null hypothesis. This framework was successfully applied in many areas of image processing \cite{davy2018reducing, desolneux2000meaningful, desolneux2001edge, almansa2003vanishing, cao2004application} and aims at identifying structure events in images. This statistical model takes its roots in the fundamental work of the Gestalt theory \cite{desolneux2007gestalt}. One of its principles, the non-accidentalness principle \cite{lowe2012perceptual} or Helmholtz principle \cite{zhu1999embedding, desolneux2001edge}, states that no structure is perceived in a noise model. To be precise, in our case of interest, we want to assess that no spatial redundancy is perceived in microtexture models. This methodology allows us to design only a locally structured background model to define a null hypothesis. Combining \textit{a~contrario} \ principles and patch-based measures, we propose an algorithm to identify auto-similarities in images. We then turn to the implementation of such an algorithm and illustrate the diversity of its possible applications with three examples: denoising, lattice extraction, and periodicity ranking of textures. In our denoising application we propose a modification of the celebrated Non-Local means algorithm \cite{buades2005non} (NL-means) by inserting a threshold in the selection of similar patches. Using an \textit{a~contrario} \ model we are able to give probabilistic controls on the patch reconstruction. We then focus on periodicity detection and, more precisely, lattice extraction. Periodicity in images was described as an important feature in early mathematical vision~\cite{haralick1973textural}. Most of the proposed methods to analyze periodicity rely on global measurements such as the modulus of the Fourier transform \cite{matsuyama1983structural} or the autocorrelation \cite{lin1997extracting}. These global techniques are widely used in crystallography, where lattice properties, such as the angle between basis vectors, are fundamental \cite{mevenkamp2015unsupervised, sang2014revolving}.
Since all of our measurements are local, we are able to identify periodic similarities even in images which are not periodic but present periodic parts, for instance if two crystal structures are present in a single crystallography image. We draw a link between the introduced notion of auto-similarity and the inertia measurement in co-occurrence matrices \cite{haralick1973textural}. We then introduce our lattice proposal algorithm which combines a detection map, \textit{i.e.} \ the output of our redundancy detection algorithm, and graphical model techniques, as in \cite{park2009deformed}, in order to extract lattice basis vectors. Our last application concerns texture ranking. Since the definition of texture is broad and covers a wide range of images, it is natural to look for criteria that distinguish textures. In \cite{liu2004computational}, the authors use a classical measure for distinguishing textures:~regularity. In this work, we narrow this criterion and restrict ourselves to the study of periodicity in texture images. The proposed graphical model inference naturally gives a quantitative measurement for texture periodicity ranking. We give an example of ranking on 25 images of the Brodatz set. Our paper is organized as follows. An \textit{a~contrario} \ framework for local similarity detection is proposed in Section \ref{sec:a_contrario_framework}. In the \textit{a~contrario} \ framework, a background model, corresponding to the null hypothesis, is required. The consequence of choosing Gaussian models as background models is investigated and a redundancy detection algorithm is proposed in Section \ref{sec:gauss-model-cons}. The rest of the paper is dedicated to some examples of application of the proposed framework. After reviewing one of the most popular methods in image denoising, we introduce a denoising algorithm in Section~\ref{sec:nl-means-contrario-1} and present our experimental results in Section \ref{sec:expe-results}. Local dissimilarity measurements can be used as periodicity detectors. The link between the locality of the introduced functions and the literature on periodicity detection problems is investigated in Section \ref{sec:existing_algorithms}. An algorithm for detecting lattices in images is given in Section \ref{sec:algorithm and properites} and numerical results are presented in Section \ref{sec:experimental-results}. In our last experiment, in Section \ref{sec:texture-rank}, we introduce a criterion for measuring texture periodicity. We conclude our study and discuss future work in Section \ref{sec:conclusion}. \section{An a contrario framework for auto-similarity} \label{sec:similarity functions} We first introduce a notion of dissimilarity between patches of an input image. \begin{mydef}[Auto-similarity] Let $u$ be an image defined over a domain $\Omega = \llbracket 0,M-1 \rrbracket^2 \subset \mathbb{Z}^2$, with $M \in \mathbb{N} \backslash \{ 0\}$. Let $\omega \subset \mathbb{Z}^2$ be a patch domain. We introduce ${P_{\omega}(u)= (\dot{u}(\veclet{y}))_{\veclet{y} \in \omega}}$ the patch at position $\omega$ in the periodic extension of $u$ to $\mathbb{Z}^2$, denoted by~$\dot{u}$. We define the auto-similarity with patch domain $\omega$ and offset $\veclet{t}\in \mathbb{Z}^2$ by \begin{equation} \mathcal{AS}(u,\veclet{t},\omega) = \norm{P_{\veclet{t}+\omega}(u) - P_{\omega}(u)}_2^2 \; .
\end{equation} \label{def:autosim} \end{mydef} The auto-similarity computes the squared Euclidean distance between a patch of $u$ defined on a domain $\omega$ and the patch of $u$ defined by the domain $\omega$ shifted by the offset vector $\veclet{t}$. In what follows, we introduce an \textit{a~contrario} \ framework on the auto-similarity. This framework will allow us to derive an algorithm for detecting spatial redundancy in natural images. \subsection{A contrario framework} \label{sec:a_contrario_framework} In this section we fix an image domain $\Omega \subset \mathbb{Z}^2$ and a patch domain $\omega \subset \Omega$. We recall that our final aim is to design a criterion that will answer the following question: are two given patches similar? This criterion will be given by the comparison between the value of a dissimilarity function and a threshold $a$. We will define the threshold $a$ so that few similarities are identified in the null hypothesis model, \textit{i.e.} \ similarity does not occur ``just by chance''. Thus we can reformulate the initial question: is the similarity output of a dissimilarity function between two patches small enough? Or, to be more precise, how can we set the threshold $a$ in order to obtain a criterion for assessing similarity between patches? This formulation agrees with the \textit{a~contrario} \ framework \cite{desolneux2007gestalt}, which states that geometrical and/or perceptual structure in an image is meaningful if it is a rare event in a background model. This general principle is sometimes called the Helmholtz principle \cite{zhu1999embedding} or the non-accidentalness principle \cite{lowe2012perceptual}. Therefore, in order to control the number of similarities identified in the background model, we study the probability density function of the auto-similarity function with input random image $U$ over $\Omega$. We will denote by $\mathbb{P}_0$ the probability distribution of $U$ over $\mathbb{R}^{\Omega}$, the images over $\Omega$. We will assume that $\mathbb{P}_0$ is a microtexture model, see Definition~\ref{def:microtexture} below for a precise definition of such a model. We define the following significant event which encodes spatial redundancy: $\mathcal{AS}(u,\veclet{t},\omega) \leq a(\veclet{t})$, where $a$, the threshold function, is defined over the offsets ($\veclet{t} \in \mathbb{Z}^2$) but also depends on other parameters such as $\omega$ or $\mathbb{P}_0$. The dependency of $a$ with respect to $\veclet{t}$ cannot be omitted. For instance, even in a Gaussian white noise $W$, the probability distribution function of $\mathcal{AS}(W, \veclet{t}, \omega)$ depends on $\veclet{t}$. The Number of False Alarms ($\operatorname{NFA}$) is a crucial quantity in the \textit{a~contrario} \ methodology. A false alarm is defined as an occurrence of the significant event in the background model~$\mathbb{P}_0$. We recall that in our model the significant event is patch redundancy. This test must be conducted for every possible configuration of the significant event, \textit{i.e.} \ in our case we test every possible offset $\veclet{t}$. The $\operatorname{NFA}$ \ is then defined as the expectation of the number of false alarms over all possible configurations. Bounding the $\operatorname{NFA}$ \ ensures that the probability of identifying $k$ offsets with spatial redundancy is also bounded, see Proposition \ref{prop:a_contrario_bound}. In what follows we give the definition of the $\operatorname{NFA}$ \ in the spatial redundancy context, after a short illustration of the auto-similarity function.
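As an illustration, the auto-similarity of Definition \ref{def:autosim} can be computed in a few lines of Python. The sketch below is only illustrative and not the code used in our experiments; it assumes a grayscale image stored as a \texttt{numpy} array, with \texttt{np.roll} playing the role of the periodic extension $\dot{u}$.
\begin{verbatim}
import numpy as np

def patch(u, top_left, p):
    # P_omega(u) for omega = top_left + [0, p)^2; rolling the image
    # implements the periodic extension of u to Z^2.
    i, j = top_left
    return np.roll(u, shift=(-i, -j), axis=(0, 1))[:p, :p]

def auto_similarity(u, t, top_left, p=8):
    # Squared Euclidean auto-similarity AS(u, t, omega).
    i, j = top_left
    d = patch(u, (i + t[0], j + t[1]), p) - patch(u, top_left, p)
    return float(np.sum(d ** 2))
\end{verbatim}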
\begin{mydef}[$\operatorname{NFA}$] Let $U \sim \mathbb{P}_0$, where $\mathbb{P}_0$ is a background microtexture model. We define the auto-similarity probability map $\mathsf{AP}$ for any $\veclet{t} \in \Omega$, $\omega \subset \Omega$ and $a \in \mathbb{R}^{\Omega}$ by \begin{equation}\mathsf{AP}(\veclet{t},\omega, a) = \prob[0]{ \mathcal{AS}(U,\veclet{t},\omega) \leq a(\veclet{t})} \label{eq:def_autoprob} \; .\end{equation} We define the auto-similarity expected number of false alarms $\mathsf{ANFA}$ by \begin{equation} \label{eq:NFA} \mathsf{ANFA}(\omega, a) = \sum_{\veclet{t} \in \Omega} \mathsf{AP}(\veclet{t}, \omega, a) \; . \end{equation} \label{def:NFA} \end{mydef} Note that $\mathsf{AP}(\veclet{t}, \omega, a)$ corresponds to the probability that $\omega + \veclet{t}$ is similar to $\omega$ in the background model $U$. For any $\veclet{t} \in \Omega$, the cumulative distribution function of the auto-similarity random variable $\mathcal{AS}(U,\veclet{t},\omega)$ under $\mathbb{P}_0$ evaluated at value $\alpha(\veclet{t})$ is given by $\mathsf{AP}(\veclet{t},\omega,\alpha(\veclet{t}))$. We denote by ${q \mapsto \mathsf{AP}^{-1}(\veclet{t},\omega,q)}$ the inverse cumulative distribution function, potentially defined by a generalized inverse ($\mathsf{AP}^{-1}(\veclet{t},\omega,q) = \inf \{\alpha \in \mathbb{R}, \ \mathsf{AP}(\veclet{t}, \omega, \alpha) \geq q \}$), of the auto-similarity random variable for a fixed offset $\veclet{t}$, with $q \in (0,1)$ a quantile. We now have all the tools to control the number of detected offsets in the background model. \begin{mydef}[Detected offset] Let $u \in \mathbb{R}^{\Omega}$ be an image, $\omega \subset \Omega$ a patch domain, and $a \in \mathbb{R}^{\Omega}$. An offset $\veclet{t}$ is said to be detected with respect to $a$, if $\mathcal{AS}(u,\veclet{t}, \omega) \leq a(\veclet{t})$. \label{def:detec_offset} \end{mydef} Note that a detected offset in $U \sim \mathbb{P}_0$ corresponds to a false alarm in the \textit{a~contrario} \ model. In what follows we suppose that the cumulative distribution function of $\mathcal{AS}(U,\veclet{t}, \omega)$ is invertible for every $\veclet{t} \in \Omega$. This ensures that for any $\veclet{t} \in \Omega$ and $q \in (0,1)$ we have \begin{equation} \label{eq:invertibility} \mathsf{AP}\left(\veclet{t}, \omega, \mathsf{AP}^{-1}\left(\veclet{t},\omega, q\right)\right) = q \; . \end{equation} \begin{prop} \label{prop:a_contrario_bound} Let $\operatorname{NFA}_{\text{max}} \geq 0$ and for all $\veclet{t} \in \Omega$ define $ a(\veclet{t}) = \mathsf{AP}^{-1}\left(\veclet{t}, \omega, \operatorname{NFA}_{\text{max}} / |\Omega|\right)$. We have that for any $n \in \mathbb{N} \without{0}$, \begin{equation*} \mathsf{ANFA}(\omega, a) = \operatorname{NFA}_{\text{max}} \quad \text{and} \quad \prob[0]{ \text{\quotem{at least $n$ offsets are detected in $U$}}} \leq \frac{\operatorname{NFA}_{\text{max}}}{n} \;. \end{equation*} \end{prop} \begin{proof} Using \eqref{eq:NFA}, and $a(\veclet{t}) = \mathsf{AP}^{-1}\left(\veclet{t}, \omega, \operatorname{NFA}_{\text{max}} / |\Omega|\right)$, we get \[ \mathsf{ANFA}(\omega, a) = \summ{\veclet{t} \in \Omega}{}{\mathsf{AP}(\veclet{t},\omega, a)} = \summ{\veclet{t} \in \Omega}{}{\mathsf{AP}\left(\veclet{t}, \omega, \mathsf{AP}^{-1}\left(\veclet{t},\omega, \operatorname{NFA}_{\text{max}} / \vertt{\Omega}\right)\right)} = \operatorname{NFA}_{\text{max}} \; , \] where the last equality is obtained using \eqref{eq:invertibility}.
Concerning the upper-bound, we have, using the Markov inequality and \eqref{eq:def_autoprob}, for any $n \in \mathbb{N} \without{0}$ \begin{align*} \prob[0]{ \text{\quotem{\small at least $n$ offsets are detected in $U$}}} &= \prob[0]{\sum_{\veclet{t} \in \Omega}{}{\mathbb{1}_{\mathcal{AS}(U, \veclet{t}, \omega) \leq a(\veclet{t})}} \ge n} \\ &\leq \frac{\sum_{\veclet{t} \in \Omega}{}{\expec{\mathbb{1}_{\mathcal{AS}(U, \veclet{t}, \omega) \leq a(\veclet{t})}}}}{n} \leq \frac{\operatorname{NFA}_{\text{max}}}{n} \; , \end{align*} where $\mathbb{1}_{\mathcal{AS}(U, \veclet{t}, \omega) \leq a(\veclet{t})} = 1$ if $\mathcal{AS}(U, \veclet{t}, \omega) \leq a(\veclet{t})$ and $0$ otherwise. \end{proof} Thus, setting $a$ as in Proposition \ref{prop:a_contrario_bound}, we have that an offset $\veclet{t} \in \Omega$ is detected for an image~$u \in \mathbb{R}^{\Omega}$ if \begin{equation}\mathcal{AS}(u,\veclet{t},\omega) \leq \mathsf{AP}^{-1}\left(\veclet{t},\omega, \operatorname{NFA}_{\text{max}} / \vertt{\Omega}\right) \; . \label{eq:icdf_ineq}\end{equation} This \textit{a~contrario} \ detection framework can then be simply rewritten as 1) computing the auto-similarity function with input image $u$, 2) thresholding the obtained dissimilarity map with the inverse cumulative distribution function of the computed dissimilarity function under $\mathbb{P}_0$. The computed threshold depends on the offset and Proposition \ref{prop:a_contrario_bound} ensures probabilistic guarantees on the expected number of detections under $\mathbb{P}_0$. Using the inverse property of the inverse cumulative distribution function and \eqref{eq:icdf_ineq}, we obtain that an offset is detected if and only if \begin{equation}\prob[0]{\mathcal{AS}(U,\veclet{t},\omega) \leq \mathcal{AS}(u,\veclet{t},\omega)}= \mathsf{AP}\left(\veclet{t}, \omega, \mathcal{AS}(u,\veclet{t},\omega)\right) \leq \operatorname{NFA}_{\text{max}} /\vertt{\Omega} \; . \label{eq:true_detec}\end{equation} Therefore, the thresholding operation can be conducted either on $\mathcal{AS}(u,\veclet{t}, \omega)$, see \eqref{eq:icdf_ineq}, or on $\mathsf{AP}\left(\veclet{t}, \omega, \mathcal{AS}(u,\veclet{t},\omega)\right)$, see \eqref{eq:true_detec}. This property will be used in Section \ref{sec:detection-algorithm} to define a similarity detection algorithm based on the evaluation of $\mathcal{AS}(u,\veclet{t}, \omega)$. \section{Gaussian model and detection algorithm} \label{sec:gauss-model-cons} \subsection{Choice of background model} \label{sec:choice-backgr-model} In this section we compute $\mathsf{AP} \left( \veclet{t}, \omega, \alpha \right)$, \textit{i.e.} \ the cumulative distribution function of the similarity function under the null hypothesis model, with a Gaussian background model. Indeed, if the background model is simply a Gaussian white noise, the similarities identified by the \textit{a~contrario} \ algorithm are the ones that are not likely to be present in the Gaussian white noise image model. More generally, we consider stationary Gaussian random fields defined in the following way: we introduce an image $f \in \mathbb{R}^{\Omega}$ which contains the microtexture information we want to discard in our \textit{a~contrario} \ model. In what follows we give the definition of the microtexture model associated to $f$.
\begin{mydef}[Microtexture model] \label{def:microtexture} Let $f \in \mathbb{R}^{\Omega}$, we define the associated microtexture model $U$ by setting, $U = f * W$, where $*$ is the periodic convolution operator over $\Omega$ given by $v * w(\veclet{x}) = \sum_{\veclet{y} \in \Omega} \dot{v}(\veclet{y}) \dot{w}(\veclet{x} - \veclet{y})$ and $W$ is a white noise over $\Omega$, \textit{i.e.} \ $(W(\veclet{x}))_{\veclet{x} \in \Omega}$ are i.i.d. $\mathcal{N}(0,1)$ random variables. \end{mydef} Given an image $u \in \mathbb{R}^{\Omega}$, a microtexture model can be derived considering \begin{equation}m_u = \sum_{\veclet{x} \in \Omega} u(\veclet{x})/|\Omega| \; , \quad \text{and} \quad U = |\Omega|^{-1/2} ( u - m_u)* W \; . \label{eq:gaussian_model}\end{equation} Note that if $U$ is given by \eqref{eq:gaussian_model} we have for any $\veclet{x}, \veclet{y} \in \Omega$ \begin{equation} \expec{U(\veclet{x})} = 0\quad \text{and} \ \cov{U(\veclet{x}), U(\veclet{y})} = |\Omega|^{-1} \sum_{\veclet{z} \in \Omega}(\dot{u}(\veclet{z}) - m_u)(\dot{u}(\veclet{z} - (\veclet{y}-\veclet{x})) - m_u) \; . \end{equation} We refer to \cite{galerne2011random} for a mathematical study of this model. \begin{figure}[h] \centering \subfloat[]{\includegraphics[width=.3\linewidth]{./img/im_ori.jpg}}\hfill \subfloat[]{\includegraphics[width=.3\linewidth]{./img/white_noise.jpg}}\hfill \subfloat[]{\includegraphics[width=.3\linewidth]{./img/gaussian.jpg}}\hfill \caption{\figuretitle{Examples of microtexture models} In~(a) we present an original $256 \times 256$ image. In~(b) and~(c) we derive two microtexture models. In~(b) we present a Gaussian white noise and in~(c) the microtexture model given by \eqref{eq:gaussian_model}. Note that (c) shows more local structure than (b).} \label{fig:microtexture} \end{figure} \subsection{Detection algorithm} \label{sec:detection-algorithm} In this section, $\Omega$ is a finite square domain in $\mathbb{Z}^2$. We fix $\omega \subset \Omega$. We also define $f$, a function over $\Omega$. We consider the Gaussian random field $U = f * W$, where $W$ is a Gaussian white noise over $\Omega$. We denote by $\Gamma_f$ the autocorrelation of $f$, \textit{i.e.} \ $\Gamma_f = f * \check{f}$ where for any $\veclet{x} \in \Omega$, $\check{f}(\veclet{x}) = f(-\veclet{x})$. We introduce the offset correlation function~$\Delta_f$ defined for any $\veclet{t}, \veclet{x} \in \Omega$ by \begin{equation} \label{eq:delta_fun} \Delta_f(\veclet{t}, \veclet{x}) = 2\Gamma_f(\veclet{x}) - \Gamma_f(\veclet{x+t}) - \Gamma_f(\veclet{x-t}) \; . \end{equation} The following proposition, proved in \cite{debortoli2018gaussian}, gives the explicit probability distribution function of the squared $\ell^2$ auto-similarity. \begin{prop}[Squared $\ell^2$ auto-similarity function exact probability distribution function] Let $\Omega = \llbracket 0, M-1 \rrbracket^2$ with $M \in \mathbb{N} \backslash \{ 0 \}$, $\omega \subset \Omega$, $f \in \mathbb{R}^{\Omega}$ and $U = f* W$ where $W$ is a Gaussian white noise over $\Omega$.
Then, for any $\veclet{t} \in \Omega$, $\mathcal{AS}(U,\veclet{t},\omega)$ has the same distribution as $\sum_{k=0}^{|\omega| - 1}{\lambda_k(\veclet{t},\omega)Z_k}$, with $Z_k$ independent chi-square random variables with parameter 1 and $\lambda_k(\veclet{t},\omega)$ the eigenvalues of the covariance matrix $C_{\veclet{t}}$ associated with the function $\Delta_f(\veclet{t},\cdot)$ defined in \eqref{eq:delta_fun} and restricted to $\omega$, \textit{i.e.} \ for any $\veclet{x_1}, \veclet{x_2} \in \omega$, $C_{\veclet{t}}(\veclet{x_1}, \veclet{x_2}) = \Delta_f(\veclet{t}, \veclet{x_1} - \veclet{x_2})$. \label{prop:squared_exact} \end{prop} As a consequence, if $f = \delta_0$, \textit{i.e.} \ $U$ is a Gaussian white noise, and $\{ \veclet{x} + \veclet{t}, \veclet{x} \in \omega\} \cap \omega = \emptyset$, \textit{i.e.} \ there is no overlap between the patch domain $\omega$ and its shifted version, then $\mathcal{AS}(U,\veclet{t},\omega)$ is twice a chi-square random variable with parameter $|\omega|$. In order to compute the cumulative distribution function of a quadratic form of Gaussian random variables we must deal with two issues: 1) the computation of the eigenvalues $\lambda_k(\veclet{t}, \omega)$ might be time-consuming and efficient methods must be developed; 2) the exact computation of the cumulative distribution function of a quadratic form of Gaussian random variables requires the use of heavy integrals, see \cite{imhof1961computing}. In \cite{debortoli2018gaussian} a projection method is introduced in order to easily compute approximated eigenvalues, with equality when $\omega = \Omega$. The so-called Wood F method (see \cite{wood1989f, bodenham2016comparison}) shows the best trade-off between accuracy and computational cost to approximate the cumulative distribution function of quadratic forms in Gaussian random variables with given weights. It is a moment method of order 3, fitting a Fisher-Snedecor distribution by matching the first three moments. Note that in \cite{liu2009chisquare} another moment method of order 3 is proposed. In what follows, we assume that we can compute the cumulative distribution function of $\mathcal{AS}(U,\veclet{t},\omega)$ and we refer to \cite{debortoli2018gaussian} for further details. In Algorithm \ref{alg:auto-similaritydetection} we propose an \textit{a~contrario} \ framework for spatial redundancy detection. We suppose that $u$ and $\omega$ are provided by the user. Using Proposition \ref{prop:a_contrario_bound} and \eqref{eq:true_detec}, we say that an offset is detected if $\mathsf{AP}\left(\veclet{t}, \omega, \mathcal{AS}(u,\veclet{t},\omega)\right) \leq \operatorname{NFA}_{\text{max}} /\vertt{\Omega}$. The value $\operatorname{NFA}_{\text{max}}$ \ is set by the user. The background model used in the auto-similarity detection is the one given in \eqref{eq:gaussian_model}. Therefore, Proposition \ref{prop:squared_exact} and the discussion that follows can be used to compute an approximation of $\mathsf{AP}(\veclet{t}, \omega, \mathcal{AS}(u, \veclet{t},\omega))$. In Figure \ref{fig:illus_dmap} we apply Algorithm \ref{alg:auto-similaritydetection} to a texture image.
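As a model-agnostic sanity check for the approximations discussed above, the probability $\mathsf{AP}(\veclet{t}, \omega, \mathcal{AS}(u,\veclet{t},\omega))$ used in Algorithm \ref{alg:auto-similaritydetection} below can also be estimated by a crude Monte Carlo simulation of the background model \eqref{eq:gaussian_model}. The Python sketch below (reusing \texttt{auto\_similarity} from the earlier sketch) is only illustrative: it is far slower than the eigenvalue computation combined with the Wood F approximation used in our experiments.
\begin{verbatim}
import numpy as np

def sample_microtexture(u, rng):
    # One sample of U = |Omega|^{-1/2} (u - m_u) * W; the periodic
    # convolution is computed in the Fourier domain.
    f = (u - u.mean()) / np.sqrt(u.size)
    w = rng.standard_normal(u.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(w)))

def ap_monte_carlo(u, t, top_left, value, p=8, n_samples=2000, seed=0):
    # Estimate AP(t, omega, value) = P_0(AS(U, t, omega) <= value).
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        U = sample_microtexture(u, rng)
        hits += auto_similarity(U, t, top_left, p) <= value
    return hits / n_samples
\end{verbatim}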
\begin{algorithm} \caption{Auto-similarity detection \label{alg:auto-similaritydetection}} \begin{algorithmic}[1] \Function{autosim-detection}{$u$, $\omega$, $\operatorname{NFA}_{\text{max}}$} \For{$\veclet{t} \in \Omega$} \Let{val}{$\mathcal{AS}(u,\veclet{t},\omega)$} \Let{$P_{map}(\veclet{t})$}{$\mathsf{AP}(\veclet{t}, \omega, \text{val})$} \Comment{$\mathsf{AP}(\veclet{t}, \omega, \text{val})$ approximation detailed in Section \ref{sec:detection-algorithm}} \Let{$D_{map}(\veclet{t})$}{$\mathbb{1}_{P_{map}(\veclet{t}) \leq \text{$\operatorname{NFA}_{\text{max}}$} / |\Omega |}$} \EndFor \State \Return{the images $P_{map}$, $D_{map}$} \EndFunction \end{algorithmic} \end{algorithm} \begin{figure} \centering \subfloat[]{\includegraphics[width=.19\linewidth]{./img/bw/img_01.jpg}}\hfill \subfloat[]{\includegraphics[width=.19\linewidth]{./img/gaussian_1.jpg}}\hfill \subfloat[]{\includegraphics[width=.19\linewidth]{./img/out_01.jpg}}\hfill \subfloat[]{\includegraphics[width=.19\linewidth]{./img/Pmap_01.jpg}}\hfill \subfloat[]{\includegraphics[width=.19\linewidth]{./img/Dmap_01.jpg}}\hfill \caption{\figuretitle{Outputs of Algorithm \ref{alg:auto-similaritydetection}} In~(a) we present an original $256 \times 256$ image. In~(b) we present the associated microtexture model given by \eqref{eq:gaussian_model}. In~(c) the green patch is the input patch, \textit{i.e.} \ $P_{\omega}(u)$. In this experiment $\operatorname{NFA}_{\text{max}}$ \ is set to $1$. In~(d), respectively (e), we present the output $P_{map}$, respectively $D_{map}$, of Algorithm \ref{alg:auto-similaritydetection}. In~(c) we show in red the patches corresponding to the offsets detected in $D_{map}$.} \label{fig:illus_dmap} \end{figure} \section{Denoising} \label{sec:denoising} \subsection{NL-means and a contrario framework} \label{sec:nl-means-contrario-1} In this section we apply the \textit{a~contrario} \ framework to the context of image denoising and propose a simple modification of the celebrated image denoising algorithm Non-Local Means (NL-means). This algorithm was introduced in the seminal paper of Buades et al. \cite{buades2005non} and was inspired by the work of Efros and Leung in texture synthesis \cite{efros1999texture}. It was also independently introduced in \cite{awate2006unsupervised}. This algorithm relies on the simple idea that denoising operations can be conducted in the lifted patch space. In this space the usual Euclidean distance acts as a good similarity detector and we can obtain a denoised patch by averaging all the patches with weights that depend on this Euclidean distance. Usually the weight function is set to have exponential decay, but it was suggested in \cite{goossens2008improved, salmon2010two, duval2010parameter} to use compactly supported weight functions in order to avoid the loss of isolated details. Since its introduction, many algorithms derived from NL-means have been proposed in order to embed the algorithm in general statistical frameworks \cite{duval2011bias, lebrun2013nonlocal} or to take into account the underlying geometry of the patch space \cite{houdard2017high}. Among the state-of-the-art denoising algorithms (see \cite{lebrun2012secrets} for a review), we consider Block-Matching and 3D Filtering (BM3D) \cite{dabov2007image} as a comparison point for our algorithm. There exist several works combining \textit{a~contrario} \ models and denoising tasks. Coupier et al. in \cite{coupier2005image} propose to combine morphological filters and a hypothesis testing framework to remove impulse noise.
In \cite{delon2013patch} Delon and Desolneux compare different statistical frameworks to perform denoising with Gaussian noise or impulse noise. The \textit{a~contrario} \ model was also successfully used to deal with speckle noise \cite{fablet2005speckle} and quasi-periodic noise \cite{sur2015contrario}; these methods rely on the thresholding of wavelet or Fourier coefficients. In \cite{kervrann2008local}, Kervrann and Boulanger derive approximated probabilistic thresholds using $\chi^2$ probability distribution functions. In \cite{wue2013probabilistic} the authors propose a testing framework in order to estimate thresholds. The expressions they derive also rely on an approximation of the probability distribution of the squared Euclidean distance between two patches in Gaussian white noise. Following a standard extension procedure of the NL-means algorithm, we consider a threshold version of it, see Algorithm \ref{alg:nlmeans_thres}. In what follows we fix a ``clean'', or original, image $u_0$ defined over $\Omega$, a finite rectangular domain of $\mathbb{Z}^2$, and a noisy image $u = u_0 + \sigma w$, with $w$ a realization of a standard Gaussian random field $W$ and $\sigma >0$ the standard deviation of the noise. In all of our experiments we suppose that $\sigma$ is known. Note that there exist several algorithms to estimate $\sigma$ from real images, see \cite{ponomarenko2007noise} for instance. Our goal is to retrieve $u_0$ based on the information in $u$. We consider the lifted version of $u$ in a patch space. Let $\omega_0$ be a centered $8 \times 8$ patch domain. For a patch domain $\omega = \veclet{x} + \omega_0$, the patch search window $T$ is defined by \begin{equation}T = \left\lbrace \veclet{t} \in \mathbb{Z}^2, \ \veclet{t} + \omega \subset \Omega, \ \| \veclet{t} \|_{\infty} \leq c \right\rbrace \; , \label{eq:patch_search_window} \end{equation} with $c \in \mathbb{N}$. We denote by $|T|$ the cardinality of $T$. There is a large literature concerning the setting of $c$ and $\omega_0$, see \cite{duval2010parameter}. Note that the locality of the patch window was shown to be a crucial feature of NL-means \cite{grewenig2011rotationally}. Given a collection of denoised patches $\hat{p}(u, \omega)$ for all patch domains $\omega$, we obtain the pixel at position $\veclet{x}$ in the denoised image $\hat{u}$ using the following average, see \cite{buades2011non}, \begin{equation} \hat{u}(\veclet{x}) = \left| \lbrace \veclet{t} \in \Omega, \ \text{s.t} \ \veclet{x} \in \veclet{t} + \omega \subset \Omega \rbrace \right|^{-1} \sum_{\veclet{t} \in \Omega, \ \text{s.t} \ \veclet{x} \in \veclet{t} + \omega \subset \Omega} \hat{p}(u,\veclet{t} + \omega)(\veclet{x}) \; . \label{eq:mean_denoising} \end{equation} We now introduce our modification of NL-means. We suppose that we are provided with a threshold function $a$. The choice of such a function is discussed in Proposition \ref{prop:a_contrario_bound_nlmeans}.
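As a complement to Algorithm \ref{alg:nlmeans_thres} below, the procedure can be sketched in Python as follows. The sketch is only illustrative and unoptimized; for simplicity it uses a constant threshold given by the approximation valid for non-overlapping patch domains in Gaussian white noise (each coordinate of the patch difference is then $\mathcal{N}(0, 2\sigma^2)$, so the auto-similarity is $2\sigma^2$ times a chi-square variable with $|\omega|$ degrees of freedom), whereas our implementation uses the exact offset-dependent quantiles.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def nl_means_threshold(u, sigma, p=8, c=10, nfa_max=4.41):
    # Constant threshold from the non-overlap white-noise approximation:
    # AS ~ 2 sigma^2 chi2(p^2).
    M, N = u.shape
    n_offsets = (2 * c + 1) ** 2                      # |T|
    a = 2.0 * sigma ** 2 * chi2.ppf(1.0 - nfa_max / n_offsets, p * p)
    acc = np.zeros_like(u, dtype=float)               # sum of patch estimates
    cnt = np.zeros_like(u, dtype=float)               # estimates per pixel
    for i in range(M - p + 1):
        for j in range(N - p + 1):
            ref = u[i:i + p, j:j + p]
            num, den = np.zeros((p, p)), 0
            for di in range(-c, c + 1):
                for dj in range(-c, c + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii <= M - p and 0 <= jj <= N - p:
                        cand = u[ii:ii + p, jj:jj + p]
                        if np.sum((cand - ref) ** 2) <= a:  # t = 0 always kept
                            num, den = num + cand, den + 1
            acc[i:i + p, j:j + p] += num / den
            cnt[i:i + p, j:j + p] += 1
    return acc / cnt   # uniform aggregation over patches covering each pixel
\end{verbatim}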
\begin{algorithm} \caption{NL-means threshold \label{alg:nlmeans_thres}} \begin{algorithmic}[1] \Function{NL-means-threshold}{$u$, $\sigma$, $\omega_0$, $c$, $a$} \For{$\veclet{x} \in \mathbb{Z}^2, \ \veclet{x} + \omega_0 \subset \Omega$} \Let{$\omega$}{$\veclet{x} + \omega_0$} \Let{$T$}{defined by \eqref{eq:patch_search_window}} \Let{$N_{\omega}(u)$}{0} \Let{$\hat{p}(u, \omega)$}{0} \For{$\veclet{t} \in T$} \If{$\mathcal{AS}(u,\veclet{t},\omega) \leq \sigma^2 a(\veclet{t})$} \Comment{always true for $\veclet{t} =0$} \Let{$\hat{p}(u, \omega)$}{$\frac{N_{\omega}(u)}{N_{\omega}(u) + 1} \hat{p}(u,\omega) + \frac{1}{N_{\omega}(u)+1} P_{\veclet{t} + \omega}(u)$} \Comment{$P_{\omega}(u)$ is given in Definition \ref{def:autosim}} \Let{$N_{\omega}(u)$}{$N_{\omega}(u) +1$} \EndIf \EndFor \EndFor \Let{$\hat{u}$}{defined by \eqref{eq:mean_denoising}} \State \Return{$\hat{p}(u, \cdot)$, $\hat{u}$} \EndFunction \end{algorithmic} \end{algorithm} Note here that the output denoised version of the patch $\hat{p}(u, \omega)$ satisfies the following equation \begin{equation*} \hat{p}(u, \omega) = \sum_{\veclet{t} \in T} \lambda_{\veclet{t}} P_{\veclet{t} + \omega}(u) \; , \qquad \lambda_{\veclet{t}} = \frac{\ind{\mathcal{AS}(u,\veclet{t},\omega) \leq \sigma^2 a(\veclet{t})}}{\sum_{\veclet{s} \in T} \ind{\mathcal{AS}(u,\veclet{s},\omega) \leq \sigma^2 a(\veclet{s})}} \; . \end{equation*} In the original NL-means method, we have \begin{equation}\lambda_{\veclet{t}} = \frac{\exp\left( -\frac{\mathcal{AS}(u, \veclet{t}, \omega)}{h^2}\right)}{\sum_{\veclet{s} \in T} \exp\left( -\frac{\mathcal{AS}(u, \veclet{s}, \omega)}{h^2}\right)} \; . \label{eq:original_nlmeans}\end{equation} Setting $h$ is not trivial and depends on many parameters (patch size, search window size, content of the original image). As in Algorithm \ref{alg:nlmeans_thres}, we denote $N_{\omega}(u) = \sum_{\veclet{t} \in T} \ind{\mathcal{AS}(u, \veclet{t}, \omega) \leq \sigma^2 a(\veclet{t}) } $. The following proposition, similar to Proposition \ref{prop:a_contrario_bound}, gives a method for setting $a$. We say that an offset $\veclet{t}$ is a false alarm in a Gaussian white noise if the associated patch is not used in the denoising algorithm. In Proposition \ref{prop:a_contrario_bound_nlmeans} we choose $a$ in order to control the number of false alarms with high probability. \begin{prop} \label{prop:a_contrario_bound_nlmeans} Let $\operatorname{NFA}_{\text{max}} \in [0,|T|]$, $T$ given in \eqref{eq:patch_search_window} and let $a \in \mathbb{R}^{\Omega}$ be defined for any $\veclet{t} \in \Omega$ by \[a(\veclet{t}) = \mathsf{AP}^{-1}\left(\veclet{t}, \omega, 1 - \operatorname{NFA}_{\text{max}} / |T|\right) \; ,\] with background model being a Gaussian white noise $W$, \textit{i.e.} \ $f= \delta_0$ in Definition \ref{def:microtexture}. Let $N_{\omega}(W) \in \lbrace 0, \dots, |T| \rbrace$ be the random number of selected patches used to denoise the patch $P_{\omega}(W)$, see Algorithm \ref{alg:nlmeans_thres}. Then for any $n \in \mathbb{N} \without{0}$ it holds that \[ \prob[0]{|T| - N_{\omega}(W) \geq n} \leq \frac{\operatorname{NFA}_{\text{max}}}{n} \; .\] \end{prop} \begin{proof} Using the Markov inequality, we have \[ \prob[0]{|T| - N_{\omega}(W) \geq n} \leq \frac{|T| - \sum_{\veclet{t} \in T} \expec{\ind{\mathcal{AS}(W,\veclet{t}, \omega) \leq a(\veclet{t})}}}{n} \leq \frac{\operatorname{NFA}_{\text{max}}}{n} \; .
\] \end{proof} In this case the null hypothesis $\mathbb{P}_0$ is given by a standard Gaussian random field, which is a special case of the Gaussian random field models introduced in Section \ref{sec:gauss-model-cons}. In the next proposition, using the \textit{a~contrario} \ framework, we obtain probabilistic guarantees on the distance between the reconstructed patch $\hat{p}(u, \omega)$ and the true patch $P_{\omega}(u_0)$. \begin{prop} \label{prop:reconstruction} Let $U = u_0 + \sigma W$, where $W$ is a standard Gaussian white noise over $\Omega$, $u_0 \in \mathbb{R}^{\Omega}$ and $\sigma >0$. Let $\veclet{x} \in \Omega$ and $\omega = \veclet{x} + \omega_0$ be a fixed patch domain and let $\operatorname{NFA}_{\text{max}} \in [0,|T|]$. We introduce the random set $\hat{T} = \lbrace \veclet{t} \in T, \ \mathcal{AS}(U,\veclet{t}, \omega) \leq \sigma^2 a(\veclet{t}) \rbrace$ (the selected offsets) with $a(\veclet{t}) = \mathsf{AP}^{-1}\left(\veclet{t}, \omega, 1 - \operatorname{NFA}_{\text{max}}/|T|\right)$ as in Proposition \ref{prop:a_contrario_bound_nlmeans} and $T$ defined in \eqref{eq:patch_search_window}. Let $a_T = \max_{\veclet{t} \in T} a(\veclet{t})$. Then for any $a_W>0$, setting $\varepsilon_{W} = 1- \prob{\| P_{\omega}(W) \|_2^2 \leq a_W \ | \ \hat{T}}$, we have \begin{equation} \label{eq:upp_bound_nlmeans} \prob{\| \hat{p}(U, \omega) - P_{\omega}(u_0) \|_2 \leq \sigma (a_T^{1/2} + a_W^{1/2}) \ | \ \hat{T}} \geq 1 - \varepsilon_W\; . \end{equation} \end{prop} \begin{proof} We have for any $\veclet{t} \in \hat{T}$ \[\al{\| P_{\veclet{t} + \omega}(U) - P_{\omega} (u_0) \|_2 &\leq \| P_{\veclet{t} + \omega}(U) - P_{\omega}(U) + P_{\omega}(U) - P_{\omega}(u_0) \|_2 \\ &\leq \| P_{\veclet{t} + \omega}(U) - P_{\omega}(U) \|_2 + \| P_{\omega}(U) - P_{\omega}(u_0) \|_2 \\ &\leq \sigma a_T^{1/2} +\sigma \| P_{\omega}(W) \|_2 \; .} \] This gives the following event inclusion for any $\veclet{t} \in \hat{T}$, \begin{equation*} \left\lbrace \| P_{\omega}(W) \|_2 \leq a_W^{1/2} \right\rbrace \subset \left\lbrace \| P_{\veclet{t} + \omega}(U) - P_{\omega} (u_0) \|_2 \leq \sigma ( a_T^{1/2} + a_W^{1/2} ) \right\rbrace \; . \end{equation*} We also have, by definition of $\varepsilon_W$, \begin{align*} &\prob{\| \hat{p}(U, \omega) - P_{\omega}(u_0) \|_2 \leq \sigma (a_T^{1/2}+a_W^{1/2}) \ | \ \hat{T}} \\ &\phantom{aaaaaaaaaaaaaaa} \geq \prob{\bigcap_{\veclet{t} \in \hat{T}} \lbrace \| P_{\veclet{t} + \omega}(U) - P_{\omega}(u_0) \|_2^2 \leq \sigma^2 (a_T^{1/2}+a_W^{1/2})^2 \rbrace \ | \ \hat{T} } \\ &\phantom{aaaaaaaaaaaaaaa} \geq \prob{ \| P_{\omega}(W) \|_2^2 \leq a_W \ | \ \hat{T} } \geq 1 - \varepsilon_W \; . \end{align*} \end{proof} In our applications we use Algorithm \ref{alg:nlmeans_thres} with $a(\veclet{t}) = \mathsf{AP}^{-1}\left(\veclet{t}, \omega, 1 - \operatorname{NFA}_{\text{max}}/|T|\right)$. Therefore we need to compute this quantity with a Gaussian white noise background model. We recall that in Section \ref{sec:detection-algorithm}, using Proposition \ref{prop:squared_exact}, we give a method to compute this quantity in general Gaussian settings. In the case of a Gaussian white noise, the next proposition shows that the eigenvalues can be computed without approximation.
\begin{prop} \label{prop:eigenvalues} Let $\veclet{t} = (t_x, t_y) \in \mathbb{Z}^2 \without{0}$, $C_{\veclet{t}}$ as in Proposition \ref{prop:squared_exact} with $f = \delta_0$ and $\omega = \llbracket 0, p-1 \rrbracket^2$, with $p \in \mathbb{N}$. We have, expressing $C_{\veclet{t}}$ in the basis corresponding to the raster scan order on the $x$-axis, \begin{equation*} C_{\veclet{t}} = \mkmatrix{B_0 & B_1 & \dots & B_{p-1} \\ B_1^{\top} & B_0 & \ddots & \vdots \\ \vdots & \ddots & B_0 & B_1 \\ B_{p-1}^{\top} & \dots & B_1^{\top} & B_0} + 2 \mathrm{Id} \; , \quad \begin{cases} B_{\ell} = -D_{|t_y|} \in \mathcal{M}_{p}(\mathbb{R}) & \text{if } \ell = |t_x| \\ B_{\ell} = 0 & \text{otherwise} \end{cases} \end{equation*} where $D_j$ is the matrix whose entries are one on the $j$-th diagonal and zero elsewhere. The eigenvalues of $C_{\veclet{t}}$ are given by $\lambda_{m,k} = 4 \sin^2\left( \frac{k\pi}{2m} \right)$ with multiplicity $r_{m,k}$ where $ m \in \llbracket 2, q + 1 \rrbracket$, $k \in \llbracket 1, m - 1 \rrbracket$ and $q = \lceil \frac{p}{|t_x| \vee |t_y|} \rceil $. For any $m \in \llbracket 2, q+1 \rrbracket$, $k \in \llbracket 1, m-1 \rrbracket$ it holds \begin{enumerate}[label=(\alph*)] \item for any $k' \in \llbracket 1, m-1 \rrbracket$, $r_{m,k} = r_{m,k'} \; ;$ \item $r_{m,k} = 2 |t_x| |t_y| \ \text{if} \ 2\leq m < q \; ;$ \item $r_{m,k} = r_x r_y \ \text{if} \ m = q +1 \; ;$ \item $ \sum_{m=2}^{q+1} \sum_{k=1}^{m-1} r_{m,k} = p^2 \; ,$ \end{enumerate} with $r_x = \left( \lceil \frac{p}{|t_x|} \rceil -q \right)|t_x| + |t_x| - p_x$, where $p_x = |t_x|\lceil \frac{p}{|t_x|} \rceil -p$. We define $r_y$ in the same manner. A similar proposition holds if $t_y \neq 0$. \end{prop} \begin{proof} The proof is postponed to Appendix A. \end{proof} This property allows us to compute exactly the eigenvalues appearing in Proposition \ref{prop:squared_exact}. In Figure~\ref{fig:thres_wn} we illustrate that $a(\veclet{t})$ varies only mildly with $\veclet{t}$ for a fixed patch size ($8 \times 8$) and patch search window ($21 \times 21$). Thus, in our implementation, we suppose that $a(\veclet{t}) = \mathsf{AP}^{-1}\left(\veclet{t}, \omega, 1 - \operatorname{NFA}_{\text{max}} / |T|\right)$ is constant and set its value to the mean of $a(\veclet{t})$ over $\veclet{t} \in T$. \begin{figure} \centering \subfloat[]{\includegraphics[width=0.4\linewidth]{./img/eigenvalues.jpg}} \qquad \subfloat[]{\includegraphics[width=0.4\linewidth]{./img/threshold_matrix.jpg}} \hfill \caption{\figuretitle{Thresholds dependency in $\operatorname{NFA}_{\text{max}}$} In (a) we display $a(\veclet{t}) =\mathsf{AP}^{-1}\left(\veclet{t}, \omega, 1 - \operatorname{NFA}_{\text{max}} / |T|\right)$ as a function of $\operatorname{NFA}_{\text{max}}$. The patch size is fixed to $8 \times 8$ and the offsets $\veclet{t}$ satisfy $\| \veclet{t} \|_{\infty} \leq 10$, hence $|T| = 21^2=441$. The red dashed line is given by $\max_{\veclet{t} \in T}a(\veclet{t})$ and the green dashed line by $\min_{\veclet{t} \in T}a(\veclet{t})$. The blue line represents the value obtained considering the simplifying assumption that patch domains do not overlap, see Proposition \ref{prop:squared_exact} and the remark that follows. The maximal relative gap between the maximum and the minimum of $a(\veclet{t})$ is $13.0 \%$. In (b) we display the mapping $\veclet{t} \ \mapsto \ a(\veclet{t})$ for $\operatorname{NFA}_{\text{max}} = 0.5$; the central pixel corresponds to $\veclet{t} = 0$.
Note that $a(\veclet{t})$ decreases as $\|\veclet{t}\|$ increases and is constant when $\|\veclet{t}\|_{\infty} \geq 8$.} \label{fig:thres_wn} \end{figure} \subsection{Some experimental results} \label{sec:expe-results} In the following paragraphs we present and comment on some results of our threshold NL-means algorithm, see Algorithm \ref{alg:nlmeans_thres}. We recall that we use the constant threshold $a = |T|^{-1}\sum_{\veclet{s} \in T} \mathsf{AP}^{-1}\left(\veclet{s}, \omega, 1 - \operatorname{NFA}_{\text{max}} / |T|\right)$. In Figure \ref{fig:denoised res} we present a first comparison with the NL-means algorithm. Perceptual results as well as Peak Signal to Noise Ratio ($\operatorname{PSNR}$) measurements\footnote{$\operatorname{PSNR}(u,v) = 10 \log_{10} \left( \frac{\max_{\Omega} u^2}{\| u - v\|_2^2}\right) \; .$} are discussed. We also present the running times of the original NL-means algorithm and of ours. The experiments were conducted with the following computer specifications: 16GB RAM, 4 Intel Core i7-7500U CPU cores (2.70GHz). Results on images other than Barbara are displayed in Figure \ref{fig:nl_means_comp_vis}. \begin{figure} \centering \subfloat[]{\resizebox{.23\textwidth}{!}{\input{./img/zoom_barbara}}} \quad \subfloat[]{\resizebox{.23\textwidth}{!}{\input{./img/zoom_barbara_noise}}} \hfill \subfloat[\tiny $\operatorname{PSNR} = 29.81$, $\delta_t = 0.46s$]{\resizebox{.23\textwidth}{!}{\input{./img/zoom_barbara_thres}}} \quad \subfloat[\tiny $\operatorname{PSNR} = 29.29$, $\delta_t = 0.37s$]{\resizebox{.23\textwidth}{!}{\input{./img/zoom_barbara_nothres}}} \hfill \caption{\figuretitle{Visual results} In (a) we present an original image (Barbara) scaled between $0$ and $255$. In (b) we add Gaussian white noise with $\sigma = 10$. We recall that the patch domain $\omega_0$ is fixed to an $8 \times 8$ square. In (c) we present the denoised results with NL-means threshold, Algorithm \ref{alg:nlmeans_thres}, where $\operatorname{NFA}_{\text{max}} = 4.41$, which corresponds to 1\% of rejected patches in the search window of a Gaussian white noise. In (d) we present the results obtained with the traditional NL-means algorithm with $h = 0.13 \sigma |\omega|$ (optimal $h$ for this noise level and this image with regard to the $\operatorname{PSNR}$ \ measure). The results are the same on the texture area for (c) and (d). The perceptual results on the zoomed region are satisfying, even though some regions are too smooth compared to the original image (a). In (c) and (d), $\delta_t$ is the running time of the algorithm.
We can observe that our algorithm is slightly slower than NL-means.} \label{fig:denoised res} \end{figure} \captionsetup[subfigure]{labelformat=empty} \begin{figure} \centering \subfloat[]{\includegraphics[width=0.22\linewidth]{./img/house.jpg}} \hfill \subfloat[]{\includegraphics[width=0.22\linewidth]{./img/house_noise.jpg}} \hfill \subfloat[\scriptsize $\operatorname{PSNR} =31.67$, $\delta_t = 0.21s$]{\includegraphics[width=0.22\linewidth]{./img/house_thres.jpg}} \hfill \subfloat[\scriptsize $\operatorname{PSNR} =30.81$, $\delta_t = 0.07s$]{\includegraphics[width=0.22\linewidth]{./img/house_h_012.jpg}} \hfill \\ \subfloat[]{\includegraphics[width=0.22\linewidth]{./img/boat.jpg}} \hfill \subfloat[]{\includegraphics[width=0.22\linewidth]{./img/boat_noise.jpg}} \hfill \subfloat[\scriptsize $\operatorname{PSNR} =29.12$, $\delta_t = 0.46s$]{\includegraphics[width=0.22\linewidth]{./img/boat_thres.jpg}} \hfill \subfloat[\scriptsize $\operatorname{PSNR} =28.44$, $\delta_t = 0.39s$]{\includegraphics[width=0.22\linewidth]{./img/boat_h_012.jpg}} \hfill \\ \subfloat[]{\includegraphics[width=0.22\linewidth]{./img/peppers.jpg}} \hfill \subfloat[]{\includegraphics[width=0.22\linewidth]{./img/peppers_noise.jpg}} \hfill \subfloat[\scriptsize $\operatorname{PSNR} =29.43$, $\delta_t = 0.22s$]{\includegraphics[width=0.22\linewidth]{./img/peppers_thres.jpg}} \hfill \subfloat[\scriptsize $\operatorname{PSNR} =29.03$, $\delta_t = 0.07s$]{\includegraphics[width=0.22\linewidth]{./img/peppers_h_013.jpg}} \hfill \\ \subfloat[]{\includegraphics[width=0.22\linewidth]{./img/cameraman.jpg}} \hfill \subfloat[]{\includegraphics[width=0.22\linewidth]{./img/cameraman_noise.jpg}} \hfill \subfloat[\scriptsize $\operatorname{PSNR} =28.82$, $\delta_t = 0.22s$]{\includegraphics[width=0.22\linewidth]{./img/cameraman_thres.jpg}} \hfill \subfloat[\scriptsize $\operatorname{PSNR} =28.68$, $\delta_t = 0.09s$]{\includegraphics[width=0.22\linewidth]{./img/cameraman_h_014.jpg}} \hfill \\ \caption{\figuretitle{NL-means comparison} In this figure we compare Algorithm \ref{alg:nlmeans_thres} with the traditional NL-means algorithm. Here $\omega_0$ is fixed to be an $8 \times 8$ square. The first column contains clean images, the second column represents the same images corrupted by a Gaussian noise with $\sigma = 20$. The third column is the output of Algorithm \ref{alg:nlmeans_thres} with $\operatorname{NFA}_{\text{max}}$ fixed to $4.41$ and the last column is the output of the NL-means algorithm for the optimal value of $h$ (with regard to the PSNR), see \eqref{eq:original_nlmeans}. Perceptual results and $\operatorname{PSNR}$ \ are comparable, even though our algorithm yields slightly better $\operatorname{PSNR}$ \ values. We also present the running times $\delta_t$ of both algorithms. Our algorithm is slower than NL-means as it computes the threshold before running the NL-means procedure.} \label{fig:nl_means_comp_vis} \end{figure} \captionsetup[subfigure]{labelformat=parens} If the threshold $a(\veclet{t})$ is high, \textit{i.e.} \ if $\operatorname{NFA}_{\text{max}} \ll |T|$, then almost no patch is rejected, which means that almost all patches are used in the denoising process. As a consequence, the output denoised image is very smooth. This smoothness is appropriate in constant regions. However, this is no longer true when the region contains details. Indeed, in this case details are lost due to the averaging process.
By setting a conservative threshold, \textit{e.g.} \ $\operatorname{NFA}_{\text{max}} / |T| \approx 0.1$, we reject all the patches for which the structure does not strongly match that of the input patch, see Figure \ref{fig:number}. This conservative property of the algorithm ensures that we can control the loss of information in the denoised image, see Proposition \ref{prop:reconstruction}. However, if no patch, other than the input patch itself, is detected as similar, we strongly overfit the original noise. Many algorithms such as BM3D, see \cite{dabov2007image}, solve this problem by treating this case as an exception, applying a specific denoising method in this situation. We show the differences between our version of NL-means and BM3D in Figure \ref{fig:bm3D}. \begin{figure} \centering \subfloat[$\operatorname{NFA}_{\text{max}} / |T| = 0.2$]{\includegraphics[width=.28\linewidth]{./img/number_080.jpg}} \hfill \subfloat[$\operatorname{NFA}_{\text{max}} / |T| = 0.1$]{\includegraphics[width=.28\linewidth]{./img/number_090.jpg}} \hfill \subfloat[$\operatorname{NFA}_{\text{max}} / |T| = 0.01$]{\includegraphics[width=.33\linewidth]{./img/number_099.jpg}} \hfill \caption{\figuretitle{Number of detections} In this figure we present, for each denoised pixel, the number of detected offsets used to compute the denoised patch, \textit{i.e.} \ the cardinality of $\hat{T}$, see Proposition \ref{prop:reconstruction}. A white pixel means that the number of detected offsets is maximal and a black pixel means that the number of detected offsets is $1$, \textit{i.e.} \ the patch is not denoised. As $\operatorname{NFA}_{\text{max}}$ decreases the number of detected offsets increases. Note that $|\hat{T}|$ is maximal, \textit{i.e.} \ equal to $21^2 = 441$, for constant regions. For $\operatorname{NFA}_{\text{max}} / |T| = 0.1$, pixels located in textured neighborhoods use on average $20$ to $40$ patches to perform denoising.} \label{fig:number} \end{figure} \begin{figure} \centering \subfloat[original]{\includegraphics[width=0.24\linewidth]{./img/barbara.jpg}} \hfill \subfloat[BM3D]{\includegraphics[width=0.24\linewidth]{./img/denoised_bm3D.jpg}} \hfill \subfloat[$\operatorname{NFA}_{\text{max}} / |T| = 0.01$]{\includegraphics[width=0.24\linewidth]{./img/barbara_nlnaive.jpg}} \hfill \subfloat[$\operatorname{NFA}_{\text{max}} / |T| = 0.1$]{\includegraphics[width=0.24\linewidth]{./img/denoise_20_nlmeans_09.jpg}} \hfill \caption{\figuretitle{Comparison with BM3D} We compare Algorithm \ref{alg:nlmeans_thres} to BM3D \cite{dabov2007image}. The original image (Barbara) is presented in (a). We consider a noisy version of the input image with $\sigma =20$. In (b) we present the output of BM3D, with default parameters, see \cite{lebrun2012analysis}. The result in (c) corresponds to the output of Algorithm \ref{alg:nlmeans_thres} with $\operatorname{NFA}_{\text{max}} / |T| = 0.01$. The output (c) is too blurry compared to (b). In order to correct this behavior we set $\operatorname{NFA}_{\text{max}} / |T| = 0.1$ in (d), \textit{i.e.} \ decrease the global threshold, and some improvements are noticeable. However the image remains blurry and artifacts due to the overfitting of the noise appear; this is known as the \textit{rare patch effect} in \cite{duval2011bias}. For instance, some patches in the scarf are not denoised anymore.} \label{fig:bm3D} \end{figure} In Figure \ref{fig:PSNR}, we show that Algorithm \ref{alg:nlmeans_thres} performs better than the original NL-means algorithm.
By setting $\operatorname{NFA}_{\text{max}} / |T| = 0.01$ we obtain that the $\operatorname{PSNR}$ \ of the denoised image is better than that of NL-means for nearly every value of $h$. \begin{figure} \centering \subfloat[$\sigma = 10$]{\includegraphics[width=.333\linewidth]{./img/psnr_10.jpg}} \hfill \subfloat[$\sigma = 20$]{\includegraphics[width=.333\linewidth]{./img/psnr_20.jpg}} \hfill \subfloat[$\sigma = 40$]{\includegraphics[width=.333\linewidth]{./img/psnr_40.jpg}} \hfill \caption{\figuretitle{$\operatorname{PSNR}$ \ study} In this figure we present the evolution of the $\operatorname{PSNR}$ \ for different values of the parameter $h$ of the original NL-means method, see \eqref{eq:original_nlmeans}, in blue, computed on the Barbara image. The $x$-axis represents $\frac{h}{\sigma | \omega |}$. The orange dashed line is the $\operatorname{PSNR}$ \ obtained for the threshold NL-means algorithm (Algorithm~\ref{alg:nlmeans_thres}) with $\operatorname{NFA}_{\text{max}} / |T| = 0.01$. Except for low levels of noise, the proposed method gives better $\operatorname{PSNR}$ \ values than the original implementation of the NL-means algorithm for every choice of $h$.} \label{fig:PSNR} \end{figure} Let us emphasize that our goal is not to provide a new state-of-the-art denoising algorithm. Indeed, we never obtain better denoising results than BM3D. However, our algorithm slightly improves the original NL-means algorithm. It shows that statistical testing can be efficiently used to measure the similarity between patches and therefore provides a robust way to perform the weighted average in this algorithm. \section{Periodicity analysis} \label{sec:application to periodicity analysis} \subsection{Existing algorithms} \label{sec:existing_algorithms} In the following sections we use our patch similarity detection algorithm, see Algorithm \ref{alg:auto-similaritydetection}, to analyze images exhibiting periodicity features. Let $\Omega \subset \mathbb{Z}^2$ be a finite domain and $\omega \subset \Omega$ a finite patch domain. Periodicity detection is a long-standing problem in texture analysis \cite{zucker1980finding}. Early algorithms used quantized images, relying on co-occurrence matrices and statistical tools such as $\chi^2$ tests or $\kappa$ tests. Global methods extract peaks in the frequency domain (Fourier spectrum)~\cite{matsuyama1983structural} or in the spatial domain (autocorrelation) \cite{lin1997extracting}. In \cite{haralick1973textural} the notion of inertia is introduced. It is defined for any $\veclet{t} \in \Omega$ by $\mathcal{I}(\veclet{t}) = \sum_{(i,j) \in \llbracket 0,N_g \rrbracket^2}(i-j)^2 \left(\sum_{\veclet{z} \in \Omega}\mathbb{1}_{\dot{u}(\veclet{z}) = i}\mathbb{1}_{\dot{u}(\veclet{z+t}) = j}\right)$, where $u$ is a quantized image on $N_g+1$ gray levels. In \cite{conners1980toward}, the authors show that the local minima of the inertia measurement can be used to assess periodicity. Similarly, we introduce the $\omega$-inertia for any $\veclet{t} \in \Omega$ by $\mathcal{I}_{\omega}(\veclet{t}) = \sum_{(i,j) \in \llbracket 0,N_g \rrbracket^2}(i-j)^2 \left(\sum_{\veclet{z} \in \omega}\mathbb{1}_{\dot{u}(\veclet{z}) = i}\mathbb{1}_{\dot{u}(\veclet{z+t}) = j}\right)$. The following proposition extends results from \cite{oh1999fast} to a local framework. \begin{prop} \label{prop:cooccurence} Let $u \in \mathbb{R}^{\Omega}$.
Suppose that $u$ is quantized, \textit{i.e.} \ there exists $N_g \in \mathbb{N}$ such that for any $\veclet{x} \in \Omega$, $u(\veclet{x}) \in \llbracket 0,N_g \rrbracket$. We have $\mathcal{I}_{\omega}(\veclet{t}) = \mathcal{AS}(u,\veclet{t},\omega)$. \end{prop} \begin{proof} For any $\veclet{t} \in \Omega$ we have \[\al{\mathcal{I}_{\omega}(\veclet{t}) &= \summ{(i,j) \in \llbracket 0,N_g \rrbracket^2}{}{(i-j)^2\summ{\veclet{x} \in \omega}{}{\mathbb{1}_{\dot{u}(\veclet{x}) = i}\mathbb{1}_{\dot{u}(\veclet{x+t}) = j}}} = \summ{\veclet{x} \in \omega, (i,j) \in \llbracket 0,N_g \rrbracket^2}{}{(i-j)^2 \mathbb{1}_{\dot{u}(\veclet{x}) = i}\mathbb{1}_{\dot{u}(\veclet{x+t}) = j}}\\ &= \summ{\veclet{x} \in \omega}{}{(\dot{u}(\veclet{x}) - \dot{u}(\veclet{x+t}))^2} = \mathcal{AS}(u,\veclet{t},\omega).}\] \end{proof} If $\omega = \Omega$ then the $\omega$-inertia statistic is exactly the inertia introduced in \cite{haralick1973textural} and the result is due to \cite{oh1999fast}. \subsection{Algorithm and properties} \label{sec:algorithm and properites} Lattice detection is closely related to periodicity analysis, since identifying a lattice is similar to extracting periodic or pseudo-periodic structures up to deformations and approximations. A state-of-the-art algorithm proposed in \cite{park2009deformed} uses a recursive framework which consists of 1) a lattice model proposal based on detectors such as Kanade-Lucas-Tomasi ($\operatorname{KLT}$) feature trackers \cite{lucas1981iterative}, 2) spatial tracking using inference in a probabilistic graphical model, 3) spatial warping correcting the lattice deformations in the original image. In this section we propose a new algorithm for lattice detection. The lattice proposal step 1) is replaced by a Euclidean auto-similarity matching detection (see Section \ref{sec:detection-algorithm} and Algorithm~\ref{alg:auto-similaritydetection}) where the patch domain $\omega$ is fixed. Using these detections we build a graph with a few nodes (usually approximately $20$ nodes for a $256 \times 256$ image). We use the same notation for the detection mapping $\veclet{t} \mapsto \mathbb{1}_{\mathcal{AS}(u,\veclet{t}, \omega) \leq a(\veclet{t})}$, \textit{i.e.} \ the $D_{map}$ output of Algorithm \ref{alg:auto-similaritydetection}, which is a binary function over the offsets, and for the set of detected offsets. We recall that two pixel coordinates $\veclet{x}$ and $\veclet{y}$ are said to be $8$-connected if $\veclet{x} = \veclet{y} + (\delta_x, \delta_y)$ with $\delta_x, \delta_y \in \lbrace -1, 0, 1 \rbrace$. The graph $\mathscr{G} = (V,E)$ is then built as follows: \begin{itemize}[label = $\blacktriangleright$] \item \textbf{Vertices}: for each 8-connected component $\mathscr{C}_k$ of $D_{map}$, we denote by $\veclet{v}_k$ a position at which the minimum of $\mathcal{AS}(u,\veclet{t},\omega)$ over $\mathscr{C}_k$ is achieved. The set of vertices $V$ is defined as $V= \seq[1][N_{\mathscr{C}}]{\veclet{v}}{k}$ where $N_{\mathscr{C}}$ is the number of 8-connected components in $D_{map}$ ; \item \textbf{Edges}: each vertex is linked with its four nearest neighbors in the sense of the Euclidean distance, defining four unoriented edges (a sketch of this construction is given below). \end{itemize} Referring to the three steps of \cite{park2009deformed}, we present our model to replace step 2) (\textit{i.e.} \ the inference in a probabilistic graphical model) and introduce the approximated lattice hypothesis defined on a graph.
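Before stating the lattice hypothesis, we note that the graph construction above can be sketched in Python as follows. The sketch is illustrative only: the names \texttt{as\_map} and \texttt{d\_map} denote the auto-similarity values and the binary detection map $D_{map}$, and \texttt{scipy.ndimage.label} with a $3\times 3$ structuring element extracts the 8-connected components.
\begin{verbatim}
import numpy as np
from scipy.ndimage import label

def build_graph(as_map, d_map, k=4):
    # Vertices: for each 8-connected component of the detection map,
    # keep the offset achieving the minimal auto-similarity.
    comps, n_comps = label(d_map, structure=np.ones((3, 3)))
    vertices = []
    for c in range(1, n_comps + 1):
        idx = np.argwhere(comps == c)
        values = [as_map[i, j] for i, j in idx]
        vertices.append(idx[int(np.argmin(values))])
    vertices = np.array(vertices, dtype=float)
    # Edges: each vertex is joined to its k nearest neighbors for the
    # Euclidean distance; edges are unoriented (stored as sorted pairs).
    edges = set()
    for i, v in enumerate(vertices):
        dists = np.linalg.norm(vertices - v, axis=1)
        for j in np.argsort(dists)[1:k + 1]:
            edges.add(tuple(sorted((i, int(j)))))
    # The edge vectors vertices[j] - vertices[i] are the observations
    # fed to the lattice model introduced next.
    return vertices, edges
\end{verbatim}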
\begin{mydef}[Approximated lattice hypothesis] \label{def:approximated_lattice_hypothesis} Let $\mathscr{G} = (V,E)$ be a random graph with $V \subset \mathbb{R}^2$. We say that $\mathscr{G}$ follows the approximated lattice hypothesis if there exists a basis $B = (b_1,b_2)$ of $\mathbb{R}^2$ and, for each edge $\veclet{e} \in E$, a couple of integers $(m_{\veclet{e}},n_{\veclet{e}}) \in \mathbb{Z}^2$ such that $\veclet{e}$ has the same distribution as $m_{\veclet{e}} b_1 + n_{\veclet{e}} b_2 + \sigma Z_{\veclet{e}}$, with $(Z_{\veclet{e}})_{\veclet{e} \in E}$ independent standard Gaussian random variables in $\mathbb{R}^2$ and $\sigma >0$. We denote by $M$ the vector of all coefficients $(m_{\veclet{e}},n_{\veclet{e}})_{\veclet{e} \in E} \in\mathbb{Z}^{2\vertt{E}} $. \end{mydef} Our goal is to perform inference in the statistical model defined by the following log-posterior \begin{equation} \mathscr{L}(B,M,\sigma^2 |E) = -2(\vertt{E} +1)\log(\sigma^2) -\frac{1}{2\sigma^2}\underbrace{\left( \summ{\veclet{e} \in E}{}{\| m_{\veclet{e}}b_1 + n_{\veclet{e}}b_2 - \veclet{\veclet{e}}\|^2} + r(B,M) \right)}_{q(B,M | E)} \; , \label{eq:log-lik} \end{equation} where $r(B,M) = \delta_B \|B\|_2^2 + \delta_M \| M \|_2^2$ with $\delta_B, \delta_M \geq 0$. A discussion on the dependence of the model on the hyperparameters $(\delta_B, \delta_M)$ is conducted in Figure \ref{fig:hyperparam}. Finding the maximizer of this full log-posterior is a non-convex integer problem. However, performing the minimization of $q$ alternately on $B$ and $M$ is easier, since at each step we only have a quadratic function to minimize. Minimizing a positive-definite quadratic function over $\mathbb{Z}^2$ is equivalent to finding the vector of minimum norm in a lattice. This last formulation is known as the Shortest Vector Problem ($\operatorname{SVP}$), which is a challenging problem \cite{micciancio2001svp} (though it is not known whether it is $\operatorname{NP}$-hard). We replace this minimization procedure over a lattice by a minimization over $\mathbb{R}^2$ followed by a rounding of this relaxed solution. \begin{algorithm} \caption{Lattice detection -- Alternate minimization \label{alg:alternate}} \begin{algorithmic}[1] \Function{Alternate-minimization}{$E$, $\delta_B$, $\delta_M$, $N_{it}$} \Let{$M_0$}{0} \Let{$B_0$}{initialization procedure} \Comment{initialization discussed at the end of Section \ref{sec:algorithm and properites}} \For{$n \gets 0 \ \textrm{to} \ N_{it} - 1$} \Let{$\tilde{M}$}{$\underset{ M \in \mathbb{R}^{2\vertt{E}}}{\operatorname{argmin}} \ q(B_n, M | E)$} \Comment{expression given in Proposition \ref{prop:alternate_update}} \If{$q\left(B_n, [\tilde{M}] \ | \ E\right) < q\left(B_n, M_n \ | \ E\right)$} \Comment{$[\cdot]$ is the nearest integer operator} \Let{$M_{n+1}$}{$[\tilde{M}]$} \Else \Let{$M_{n+1}$}{$M_n$} \EndIf \Let{$B_{n+1}$}{$\underset{B \in \mathbb{R}^4}{\operatorname{argmin}} \ q(B, M_{n+1}|E)$} \Comment{expression given in Proposition \ref{prop:alternate_update}} \EndFor \Let{$\sigma_{N_{it}}^2$}{$\underset{\sigma^2 \in \mathbb{R}_+}{\operatorname{argmin}} \ -\mathscr{L}( B_{N_{it}},M_{N_{it}},\sigma^2| E)$} \State \Return{$B_{N_{it}}, M_{N_{it}}, \sigma_{N_{it}}^2$} \EndFunction \end{algorithmic} \end{algorithm} For any $\sigma >0$ we denote by $\mathscr{L}_n(\sigma) = \mathscr{L}(B_n,M_n,\sigma^2| E)$, with $n \in \mathbb{N}$, the log-posterior sequence.
\begin{prop}[Alternate minimization update rule] \label{prop:alternate_update} In Algorithm \ref{alg:alternate}, we get for any $n \in \mathbb{N}$ \[ \tilde{M} = \left( \Lambda_{B_n} \otimes \operatorname{Id}_{\vertt{E}} \right)^{-1} E_{B_n} \in \mathbb{R}^{2 \vertt{E}} \; , \qquad B_{n+1} = \left( \Lambda_{M_{n+1}} \otimes \operatorname{Id}_{2} \right)^{-1} E_{M_{n+1}}\in \mathbb{R}^4 \; , \] with $\otimes$ the tensor product between matrices and \begin{enumerate}[label=(\alph*)] \item $\Lambda_B = \left( \begin{matrix} \|b_1\|^2+\delta_M & \langle b_1,b_2 \rangle \\ \langle b_1,b_2 \rangle & \|b_2\|^2+\delta_M \end{matrix} \right) \; , \qquad \Lambda_M = \left( \begin{matrix} \|M_1\|^2+\delta_B & \langle M_1,M_2 \rangle \\ \langle M_1,M_2 \rangle & \|M_2\|^2+\delta_B \end{matrix} \right) \; ;$ \item $E_B = \left( \begin{matrix} (\langle \veclet{e}, b_1 \rangle)_{\veclet{e} \in E} \\ (\langle \veclet{e}, b_2 \rangle)_{\veclet{e} \in E} \end{matrix} \right) \; , \qquad E_M = \left( \begin{matrix} \summ{\veclet{e} \in E}{}{m_{\veclet{e}}\veclet{e}} \\ \summ{\veclet{e} \in E}{}{n_{\veclet{e}} \veclet{e}} \end{matrix}\right) \; .$ \end{enumerate} \end{prop} \begin{proof} The proof is postponed to Appendix B. \end{proof} Note that if $B$ is orthogonal, \textit{i.e.} \ $\langle b_1, b_2 \rangle = 0$, then $\Lambda_B$ is diagonal and the proposed method is the exact solution to the minimization problem over $\mathbb{Z}^2$. \begin{thm}[Convergence in finite time] For any $\sigma >0$, $(\mathscr{L}_n(\sigma))_{n \in \mathbb{N}}$ is a non-decreasing sequence. In addition, $\seq{B}{n}$ and $\seq{M}{n}$ converge in a finite number of iterations. \end{thm} \begin{proof} $(\mathscr{L}_n(\sigma))_{n \in \mathbb{N}}$ is non-decreasing since for any $n\in \mathbb{N}$, $\mathscr{L}_n(\sigma) \leq \mathscr{L}( B_{n},M_{n+1},\sigma^2 | E) \leq \mathscr{L}_{n+1}(\sigma)$. Let us show that the sequences $(M_n)_{n \in \mathbb{N}}$ and $(B_n)_{n \in \mathbb{N}}$ are bounded. Because $(\mathscr{L}_n(\sigma))_{n \in \mathbb{N}}$ is non-decreasing, the sequence $\left(q(B_n, M_n|E) \right)_{n \in \mathbb{N}}$ is non-increasing. We obtain that \[ \delta_M \| M_n \|^2 \leq q(B_0, M_0 | E) \; , \qquad \delta_B \| B_n \|^2 \leq q(B_0, M_0 | E) \; , \] so that both sequences are bounded as soon as $\delta_B, \delta_M > 0$. The sequence $\seq{M}{n}$ is bounded, thus we can extract a converging subsequence. Since $\seq{M}{n}$ takes values in $\mathbb{Z}^{2 \vertt{E}}$, this subsequence is stationary with value $M$. Let $n_0 \in \mathbb{N}$ be the first time we hit the value $M$. Let $n \in \mathbb{N}$ with $n \ge n_0+1$. There exists $n_1 \in \mathbb{N}$ with $n_1 \ge n$ such that $M_{n_1} = M_{n_0}$, thus \[ \mathscr{L}_{n_0}(\sigma) \leq \mathscr{L}_{n_0+1}(\sigma) \leq \mathscr{L}_n(\sigma) \leq \mathscr{L}(B_{n_1-1}, M_{n_1}, \sigma^2 | E) \leq \mathscr{L}(B_{n_1-1}, M_{n_0}, \sigma^2 | E) \leq \mathscr{L}_{n_0}(\sigma) \; .\] Hence for every $n \ge n_0+1$, $\mathscr{L}_n(\sigma) = \mathscr{L}( B_n, M_n, \sigma^2 | E) = \tilde{\mathscr{L}}(\sigma)$. Suppose there exists $n \ge n_0+1$ such that $M_{n+1} \neq M_n$; this means that $\mathscr{L}(B_n,M_{n+1},\sigma^2|E) > \mathscr{L}_n(\sigma)$ (because of lines 6 and 7 of Algorithm \ref{alg:alternate}), which is absurd. Thus $\seq{M}{n}$ is stationary and so is $\seq{B}{n}$. \end{proof} In Algorithm \ref{alg:alternate}, $M_0$ is initialized with zero and $B_0$ is defined as an orthonormal (up to a dilation factor) direct basis whose first vector is given by an edge with median norm in $E$.
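For concreteness, the closed-form updates of Proposition \ref{prop:alternate_update} decouple into $2 \times 2$ linear systems and can be implemented in a few lines. The following is a minimal NumPy sketch, not the reference implementation; \texttt{init\_basis} stands for the (assumed) median-norm initialization heuristic described above, and the final $\sigma^2$ update follows from maximizing \eqref{eq:log-lik} in $\sigma^2$ under the normalisation used there.
\begin{verbatim}
import numpy as np

def q_val(B, M, E, dB, dM):
    # q(B, M | E): data term plus the two ridge penalties of r(B, M).
    resid = M @ B - E                      # row e: m_e * b1 + n_e * b2 - e
    return (resid ** 2).sum() + dB * (B ** 2).sum() + dM * (M ** 2).sum()

def alternate_minimization(E, dB=1e-2, dM=10.0, n_it=10):
    # E: (|E|, 2) array of edge vectors; B: (2, 2) array with rows b1, b2.
    B = init_basis(E)                      # assumed helper (median-norm edge)
    M = np.zeros((len(E), 2))
    for _ in range(n_it):
        # Relaxed M-step: per-edge 2x2 normal equations, then rounding.
        lam_B = B @ B.T + dM * np.eye(2)
        M_tilde = np.linalg.solve(lam_B, (E @ B.T).T).T
        M_round = np.rint(M_tilde)
        if q_val(B, M_round, E, dB, dM) < q_val(B, M, E, dB, dM):
            M = M_round
        # B-step: closed-form ridge solution.
        lam_M = M.T @ M + dB * np.eye(2)
        B = np.linalg.solve(lam_M, M.T @ E)
    sigma2 = q_val(B, M, E, dB, dM) / (4 * (len(E) + 1))  # argmax in sigma^2
    return B, M, sigma2
\end{verbatim}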
\begin{figure} \centering \subfloat[$\delta_M =0$ $\delta_B =0$]{\includegraphics[width=.24\linewidth]{./img/bad_basis_d_true_out.jpg}} \hfill \subfloat[$\delta_M = 5$ $\delta_B =10^{-1}$]{\includegraphics[width=.24\linewidth]{./img/correc_basis_d_true_out.jpg}} \hfill \subfloat[$\delta_M = 9$ $\delta_B =10^{-1}$]{\includegraphics[width=.24\linewidth]{./img/good_basis_d_true_out.jpg}} \\ \caption{\figuretitle{Influence of hyperparameters} In this experiment we assess the importance of the hyperparameters. We consider Algorithm \ref{alg:alternate} with input graph built from a detection map, the output of Algorithm \ref{alg:auto-similaritydetection}. The initialization in the three cases is the canonical basis $((0,1), (1,0))$. In (a), since the initial basis vectors are a local minimum of the optimization problem, the algorithm converges after one iteration. However, the result is not perceptually satisfying. Setting $\delta_M = 5$ and $\delta_B = 10^{-1}$ in (b), the true observed lattice is a sub-lattice of the output lattice of Algorithm~\ref{alg:alternate}. Increasing $\delta_M$ to 9 in (c), we obtain a perceptually correct lattice. For $\delta_M$ larger than 10, the basis vectors go to 0: only the regularizing term is minimized by the optimization procedure and the data-attachment term is no longer taken into account. Experimentally we found that the choice of $\delta_M$ is rather flexible and that $\delta_M \in (1,20)$ gives satisfying perceptual results if the initialization heuristic proposed in Section \ref{sec:algorithm and properites} is chosen.} \label{fig:hyperparam} \end{figure} \subsection{Experimental results} \label{sec:experimental-results} Combining the results of Section \ref{sec:algorithm and properites} and Section \ref{sec:detection-algorithm} we obtain an algorithm to extract lattices in images, see Figure \ref{fig:lattice_detec}. In what follows we first extract auto-similarities using Algorithm \ref{alg:auto-similaritydetection}, given a patch of the original image $u$; the patch domain $\omega$ is therefore set by the user. Recall that in Algorithm~\ref{alg:auto-similaritydetection}, the eigenvalues of the covariance matrix in Proposition \ref{prop:squared_exact} are approximated, and that the cumulative distribution function of the quadratic form in Gaussian random variables is computed via the Wood F method \cite{wood1989f}. Lattice detection is then performed using Algorithm~\ref{alg:alternate} with parameters $\delta_M = 10$ and $\delta_B = 10^{-2}$. \begin{figure} \centering \input{./img/lattice_detec.tex} \caption{\figuretitle{Lattice proposal algorithm} Given a user-selected patch, lattice detection and extraction compute a binary image containing all the offsets with detected similarity, as well as a lattice matching the underlying graph. The patch auto-similarity detection step was presented in Section \ref{sec:detection-algorithm}. The lattice detection step was presented in Section \ref{sec:algorithm and properites}. The first image is the input, the second one is the output of the detection algorithm. In the last step we show the original image with red squares placed on the computed lattice. Behind this image, the unoriented edges of the graph are shown in red.} \label{fig:lattice_detec} \end{figure} \subsubsection{Escher paving} \label{sec:escher_paving} In this section we study art images, Escher pavings, with strongly periodic structure.
We investigate the following parameters of our lattice detection algorithm: \begin{enumerate}[label = (\alph*)] \item the background microtexture model $\mathbb{P}_0$, \item the $\operatorname{NFA}_{\text{max}}$ \ parameter in Algorithm \ref{alg:auto-similaritydetection}, \item the patch domain $\omega$. \end{enumerate} \paragraph{Microtexture model} We confirm that the choice of the microtexture model influences the detected geometrical structures: the more structured the background noise model, the fewer detections we obtain. This situation is considered in Figure \ref{fig:microtexture_model}. \begin{figure} \centering \subfloat[]{\includegraphics[width=.24\linewidth]{./img/flyinghorses_1959_grey_graph.jpg}} \hspace{0.5cm} \subfloat[]{\includegraphics[width=.24\linewidth]{./img/flyinghorses_1959_grey_out.jpg}} \hspace{0.5cm} \subfloat[]{\includegraphics[width=.24\linewidth]{./img/flyinghorses_ADSN.jpg}} \\ \subfloat[]{\includegraphics[width=.24\linewidth]{./img/flyinghorses_1959_grey_whiten_graph.jpg}} \hspace{0.5cm} \subfloat[]{\includegraphics[width=.24\linewidth]{./img/flyinghorses_1959_grey_whiten_out.jpg}} \hspace{0.5cm} \subfloat[]{\includegraphics[width=.24\linewidth]{./img/white_noise.jpg}} \\ \subfloat[]{\includegraphics[width=.24\linewidth]{./img/flyinghorses_1959_grey_smallNFA_whiten_graph.jpg}} \hspace{0.5cm} \subfloat[]{\includegraphics[width=.24\linewidth]{./img/flyinghorses_1959_grey_smallNFA_whiten_out.jpg}} \hspace{0.5cm} \subfloat[]{\includegraphics[width=.24\linewidth]{./img/flyinghorses_1959_grey_smallNFA_whitenrandom_out.jpg}} \\ \caption{\figuretitle{Choice of the microtexture model} In this experiment we discuss the choice of the \textit{a~contrario} \ background microtexture model. In the left column we display the graph obtained after the detection step. In the middle column we superpose the proposed lattice on the original image. The original patch is drawn in green, the obtained lattice basis vectors are in cyan, and red squares are placed onto the proposed lattice. In (a) and (b) the microtexture model is given by \eqref{eq:gaussian_model} and $\operatorname{NFA}_{\text{max}}$ \ is set to $10$. A sample of this model is presented in (c). The obtained results match the perceptual lattice. In (d), (e), (g) and (h) the microtexture model is a Gaussian white noise model with variance equal to the empirical variance of the original image. A sample from this Gaussian white noise is presented in (f). In (d) and (e), $\operatorname{NFA}_{\text{max}}$ \ is set to $10$. This leads to an excessive number of detections in the input image. In order to obtain the perceptual lattice found in (b) with a Gaussian white noise model we must set the $\operatorname{NFA}_{\text{max}}$ \ parameter to $10^{-111}$. Results are presented in experiments (g), (h) and (i). Image (h) is also an example for which the median initialization for $B_0$ in Algorithm \ref{alg:alternate} identifies an unsatisfactory local minimum. This situation is corrected in (i) with a random initialization for $B_0$. In (h) the final log-posterior value is $-565.5$, which is lower than the final log-posterior value of $-542.1$ in (i). Thus (i) gives a better local maximum of the full log-posterior than (h).} \label{fig:microtexture_model} \end{figure} \paragraph{$\operatorname{NFA}_{\text{max}}$ \ parameter} Using a better adapted microtexture model as the background model, we gain robustness compared to other, less structured models such as Gaussian white noise.
However, $\operatorname{NFA}_{\text{max}}$ \ must be set carefully, otherwise two situations can occur: \begin{enumerate}[label=(\alph*)] \item if $\operatorname{NFA}_{\text{max}}$ \ is too high, too many detections are obtained (true perceptual detections are not differentiated from false positives) ; \item if $\operatorname{NFA}_{\text{max}}$ \ is too low, we fail to identify important perceptual structures in the image. \end{enumerate} We observe that a good general practice is to set $\operatorname{NFA}_{\text{max}}$ \ equal to $10$, see Figure \ref{fig:NFA}. However, if the input patch is corrupted one may increase this parameter up to $10^2$ or $10^3$, see Figure \ref{fig:preprocessing} and Figure \ref{fig:homography}. \begin{figure} \centering \subfloat{\includegraphics[width=.24\linewidth]{./img/horsemen_1946_grey_smallNFA_out.jpg}} \hfill \subfloat{\includegraphics[width=.24\linewidth]{./img/horsemen_1946_grey_out.jpg}} \hfill \subfloat{\includegraphics[width=.24\linewidth]{./img/horsemen_1946_grey_bigNFA_out.jpg}} \\ \setcounter{subfigure}{0} \subfloat[]{\includegraphics[width=.24\linewidth]{./img/horsemen_1946_grey_smallNFA_graph.jpg}} \hfill \subfloat[]{\includegraphics[width=.24\linewidth]{./img/horsemen_1946_grey_graph.jpg}} \hfill \subfloat[]{\includegraphics[width=.24\linewidth]{./img/horsemen_1946_grey_bigNFA_graph.jpg}} \hfill \caption{\figuretitle{Choice of Number of False Alarms} In this experiment we discuss the choice of the $\operatorname{NFA}_{\text{max}}$ \ parameter in the \textit{a~contrario} \ framework in the case where the underlying microtexture model is given by \eqref{eq:gaussian_model}. Each column corresponds to a pair of images: the returned lattice and its associated underlying graph. In (a), $\operatorname{NFA}_{\text{max}}$ \ is set to 1. Detections are correct but there are not enough points to precisely retrieve the perceptual lattice. In (b), $\operatorname{NFA}_{\text{max}}$ \ is set to $10$. The estimated lattice is correct. In (c), $\operatorname{NFA}_{\text{max}}$ \ is set to $10^3$. In this case we obtain false detections which lead to an incorrect final lattice. Note that the large detection zones in the binary image (c) are due to the non-validity of the Wood F approximation for some offsets. This behavior is also present in (a) and (b) but less noticeable.} \label{fig:NFA} \end{figure} \paragraph{Patch position} Patch position and size are crucial in our detection model, since we rely on local properties of the image. As shown in Figure \ref{fig:pos_size}, these parameters should be carefully selected by the user. However, for particular applications such as lattice extraction for crystallographic purposes, there exist procedures to extract primitive cells \cite{mevenkamp2015unsupervised}.
\begin{figure} \centering \subfloat[]{\includegraphics[width=.24\linewidth]{./img/pattern_Lp10_px_123_py_87__fusion.jpg}} \hspace{0.2cm} \subfloat[]{\includegraphics[width=.24\linewidth]{./img/pattern_Lp15_px_123_py_87__fusion.jpg}} \hspace{0.2cm} \subfloat[]{\includegraphics[width=.24\linewidth]{./img/pattern_Lp20_px_123_py_87__fusion.jpg}} \\ \subfloat[]{\includegraphics[width=.24\linewidth]{./img/pattern_Lp10_px_138_py_177__fusion.jpg}} \hspace{0.2cm} \subfloat[]{\includegraphics[width=.24\linewidth]{./img/pattern_Lp15_px_138_py_177__fusion.jpg}} \hspace{0.2cm} \subfloat[]{\includegraphics[width=.24\linewidth]{./img/pattern_Lp20_px_138_py_177__fusion.jpg}} \\ \subfloat[$10 \times 10$]{\includegraphics[width=.24\linewidth]{./img/pattern_Lp10_px_231_py_137__fusion.jpg}} \hspace{0.2cm} \subfloat[$15 \times 15$]{\includegraphics[width=.24\linewidth]{./img/pattern_Lp15_px_231_py_137__fusion.jpg}} \hspace{0.2cm} \subfloat[$20 \times 20$]{\includegraphics[width=.24\linewidth]{./img/pattern_Lp20_px_231_py_137__fusion.jpg}} \\ \caption{\figuretitle{Influence of patch size and patch position} For each experiment $\operatorname{NFA}_{\text{max}}$ \ is set to $10^4$, \textit{i.e.} \ 4 \% of the pixels. In most cases a lower $\operatorname{NFA}_{\text{max}}$ \ could be used, but setting a high $\operatorname{NFA}_{\text{max}}$ \ ensures that we always get detections, even if the patch only contains microtexture information. Each row corresponds to a lattice proposal with the same patch position but different patch sizes: $10 \times 10$ for the left column, $15 \times 15$ for the middle one and $20 \times 20$ for the right one. Each image shows the proposed lattice superposed on the original image. On the bottom-right of each image we display the underlying graph as well as the binary detection. On the first row the patch contains only a white region with a few gray pixels. The influence of these pixels is visible for small patch sizes (a) but is no longer felt for larger patch sizes, (b) and (c). On the second row the patch contains gray microtexture which has some local structure. We identify large similarity regions and no perceptual lattice is retrieved in (d), (e) and (f). The situation is different on the third row. The $10 \times 10$ patch contains only uniform black information in (g), but the situation changes as the patch size grows. In (h), the patch intersects black, gray and white zones. The graph is much sparser and the lattice is close to the perceptual one. In (i), the patch size is large enough to cover large areas of the three gray levels and the perceptual lattice is identified.} \label{fig:pos_size} \end{figure} \subsubsection{Crystallography images} Defect localization, noise reduction and the correction of crystalline structures in images are central tasks in crystallography. Usually, they require knowledge of the geometry of a perfect underlying crystal. In our experiments we manually identify the geometry of the periodic crystal, which allows for multiple structures in one image, provided the user inputs a primitive cell of the lattice. This primitive cell extraction could be automated \cite{mevenkamp2015unsupervised}. In Figure \ref{fig:lattices_algo}, we present an example of multiple geometry extraction. Statistics such as angles and periods can be retrieved using the estimated basis vectors. This image contains two lattices and the locality of our measurements allows for the detection of both structures.
Using a windowed Fourier transform could be efficient to obtain local measurements of the periodicity of these images, since the information is highly localized in frequency. However, in order to obtain the same detection map as Algorithm \ref{alg:auto-similaritydetection}, one must carefully set a threshold parameter playing the role of $\operatorname{NFA}_{\text{max}}$. This situation is illustrated in Figure \ref{fig:fourier_comp}. Finally, we assess the precision of our measurements by comparing our results with a model used in crystallography, see Figure \ref{fig:crystallo}. We indeed retrieve one of the possible bases used to describe these lattices. However, the symmetry constraints are not reflected in the identified basis. To obtain another basis, one must relax the regularization parameters. A more natural way to obtain the desired primitive cell would be to introduce symmetry constraints in the graphical model formulation \eqref{eq:log-lik}. \begin{figure} \centering \subfloat[]{\includegraphics[width=.24\linewidth]{./img/left_grid_graph.jpg}} \hfill \subfloat[]{\includegraphics[width=.24\linewidth]{./img/left_grid_out.jpg}} \hfill \subfloat[]{\includegraphics[width=.24\linewidth]{./img/right_grid_graph.jpg}} \hfill \subfloat[]{\includegraphics[width=.24\linewidth]{./img/right_grid_out.jpg}} \caption{\figuretitle{Lattice extraction} In this experiment we consider a crystallographic image (an orthorhombic $\textrm{NiZr}$ alloy) and set $\operatorname{NFA}_{\text{max}}$ \ to $10^2$. Two lattices are present in this image and they are correctly identified in (b) and (d). Note that in (a), respectively in (c), mostly points in the left, respectively right, part of the image are identified, thus yielding a correct lattice identification. The points which should have been identified but are nonetheless discarded correspond to regions in which we observe contrast variation. Image courtesy of Denis Gratias.} \label{fig:lattices_algo} \end{figure} \begin{figure} \centering \subfloat[]{\input{./img/zoom_crystallo}} \hfill \subfloat[$90\%$]{\includegraphics[width=.2\linewidth]{./img/autoco_th90.jpg}} \hfill \subfloat[$95\%$]{\includegraphics[width=.2\linewidth]{./img/autoco_th95.jpg}} \hfill \subfloat[$99\%$]{\includegraphics[width=.2\linewidth]{./img/autoco_th99.jpg}} \hfill \caption{\figuretitle{Comparison with Fourier based methods} Since the original image can be segmented into two highly periodic components, Fourier methods might be well adapted to the lattice extraction task. In (a) we present a sub-image of the original alloy. We compute the autocorrelation of this sub-image and threshold it. This operation gives us a detection map, as in Algorithm \ref{alg:auto-similaritydetection}. In (b) the threshold is set to $90\%$ of the maximum value of the autocorrelation: too many points are identified. In (d) the threshold is set to $99\%$ and only one point is identified. The correct lattice is identified in (c). } \label{fig:fourier_comp} \end{figure} \begin{figure} \centering \subfloat[]{\includegraphics[angle=90,width=.23\linewidth]{./img/lattice_left_zoom.jpg}} \hspace{0.5cm} \subfloat[]{\includegraphics[width=.5\linewidth]{./img/MailledeBase.pdf}} \hfill \caption{\figuretitle{Agreement with crystallography models} In (a) we perform a zoom on the lattice identified in Figure \ref{fig:lattices_algo} and compare it to the one identified by crystallographers in (b). (a) is a zoomed and rotated version of a crystalline structure similar to (b). The output lattice in (a) is the same as the one in (b).
Indeed, in (b) the red points, for instance, form a lattice. A possible basis for this lattice is given by the vectors of a parallelogram. Up to rotation, these basis vectors match the ones identified in (a). However, the parallelogram basis is not symmetric and thus is not chosen by chemists, since it does not reflect the geometry of the alloy. The preferred basis is given by the symmetric rhombus (white edges in (b)). Image courtesy of Denis Gratias.} \label{fig:crystallo} \end{figure} \subsubsection{Natural images} \label{sec:natural_images} Identifying lattices in natural images is a more challenging task, since we have to deal with image artifacts. In this section we investigate the effect of background clutter on the detection in natural images, see Figure \ref{fig:preprocessing}, and the effect of the camera position, see Figure \ref{fig:homography}. \paragraph{Preprocessing} Due to the occlusions occurring in natural images, when a lattice is superposed over a real photograph, carefully selecting structural elements might not be enough to retrieve the periodicity. Indeed, even if the lattice pattern repeats, the background does not necessarily contain any repetition, which makes the detection more complicated. In order to avoid such a problem we introduce a preprocessing step in our algorithm, encoded in a linear filter $h$. If $U$ is a sample from a Gaussian model with function $f$, then $h * U$ is a sample from a Gaussian model with function~$h * f$. Thus all the properties derived earlier remain valid under this linear operation. In Figure~\ref{fig:preprocessing}, we set $h$ to be a Laplacian operator \footnote{We use a discrete Laplacian operator $\Delta$ such that for any $\veclet{x} = (x_1,x_2)$, we get $\Delta(u)(x_1,x_2) = \left(u(x_1+1,x_2) + u(x_1-1,x_2) + u(x_1,x_2+1) + u(x_1,x_2-1) - 4 u(\veclet{x})\right)/4$, where boundaries are handled periodically.}. This operation allows us to avoid contrast problems. \begin{figure} \centering \subfloat[]{\includegraphics[width=.2\linewidth]{./img/fence_graph.jpg}} \hfill \subfloat[]{\includegraphics[width=.2\linewidth]{./img/fence_out.jpg}} \hfill \subfloat[]{\includegraphics[width=.2\linewidth]{./img/fence_laplacian_graph.jpg}} \hfill \subfloat[]{\includegraphics[width=.2\linewidth]{./img/fence_laplacian_out.jpg}} \caption{\figuretitle{Preprocessing and filtering} In (a) and (c) we display the graphs obtained with Algorithm \ref{alg:auto-similaritydetection} applied to images (b) and (d). In (b) and (d) the original image is superposed with the estimated lattice (vectors in cyan and proposed patches in red). In (a) and (b), $\operatorname{NFA}_{\text{max}}$ \ was set to $10^5$, which corresponds to 35 \% of detections in the associated \textit{a~contrario} \ model; lower values of $\operatorname{NFA}_{\text{max}}$ \ did not give enough points to conduct the lattice proposal step. We obtain a visually satisfying lattice. In (c) and (d) we apply a simple preprocessing, a Laplacian filter, to the image and set $\operatorname{NFA}_{\text{max}}$ \ to 10. The detection map is much cleaner and the estimation makes much more sense from a perceptual point of view. Note that, as in (b), the proposed lattice does not exactly match the fence periodicity.
This is due to: 1) the initialization of the algorithm and the structure of the graph in the alternate minimization algorithm; 2) the fact that the horizontal periodicity is broken by the black post.} \label{fig:preprocessing} \end{figure} \paragraph{Homography} In the previous experiments we supposed that the lattice structure was facing the camera. In many cases this assumption does not hold, and there exists a homography that maps the deformed lattice in the image to a true lattice. Our algorithm assumes that the lattice is viewed frontally and fails otherwise. However, this assumption holds locally, and we can observe a partial match of the lattices in Figure \ref{fig:homography}. \begin{figure} \centering \subfloat[]{\includegraphics[width=.24\linewidth]{./img/anger_window_dec_graph.jpg}} \hspace{0.5cm} \subfloat[]{\includegraphics[width=.24\linewidth]{./img/anger_window_dec_out.jpg}} \\ \caption{\figuretitle{Homography and locality} In this experiment $\operatorname{NFA}_{\text{max}}$ \ was set to $10^3$. Note that the detected graph is localized around the original patch in (a). In (b) we superpose the proposed lattice onto the original image. The lattice proposal is valid in a small neighborhood around the original patch. However, it is not valid for the whole image.} \label{fig:homography} \end{figure} \subsection{Texture ranking} \label{sec:texture-rank} We conclude these experiments by showing that this simple graphical model can be used to perform ranking among texture images, sorting them according to their degree of periodicity. We say that an image has a high periodicity degree if a lattice structure can be well fitted to the image. We introduce a criterion for evaluating the relevance of the lattice hypothesis. Let $u$ be an image over~$\Omega$, let $\omega \subset \Omega$ be a patch domain and let $a$ be as in Proposition \ref{prop:a_contrario_bound} with $\operatorname{NFA}_{\text{max}}$ \ set by the user. \begin{mydef}[Periodicity criterion] Let $\lbrace \veclet{t} \in \Omega, \ \mathcal{AS}(u,\veclet{t}, \omega) \leq a(\veclet{t}) \rbrace$ be the set of detected offsets and $N_{{\mathscr{C}}}$ its number of connected components as defined in Section \ref{sec:algorithm and properites}. Let also $(\widehat{B}, \widehat{M}, \widehat{\sigma})$ be the parameters estimated by Algorithm \ref{alg:alternate}. We define the periodicity criterion $c_{per}$ as \begin{equation} c_{per}(u) = \frac{\pi \widehat{\sigma}^2}{N_{\mathscr{C}}\vertt{\operatorname{det}(\hat{b}_1,\hat{b}_2)}} \; ,\label{eq:cper}\end{equation} where $\widehat{B} = (\hat{b}_1, \hat{b}_2)$. \end{mydef} The criterion $c_{per}$ simply computes the ratio between the error area of Algorithm \ref{alg:alternate}, \textit{i.e.} \ the error made when considering the approximated lattice hypothesis, see Definition \ref{def:approximated_lattice_hypothesis}, and the area of the parallelogram defined by the output basis vectors. If we have enough detections, this quantity is expected to be small when the approximated lattice hypothesis holds and large when it does not. This is why we also introduce a dependency on the number of detections: indeed, even if no lattice is perceived, the hypothesis in Definition \ref{def:approximated_lattice_hypothesis} may still hold if the number of detected offsets is small. In the experiment presented in Figure \ref{fig:ranking} we sort 25 texture images based on the $c_{per}$ criterion. Images are of size ${256 \times 256}$.
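The criterion \eqref{eq:cper} is immediate to compute once Algorithm \ref{alg:alternate} has been run. A minimal sketch, assuming NumPy and that the estimates $\widehat{\sigma}^2$, $\widehat{B}$ and $N_{\mathscr{C}}$ are already available:
\begin{verbatim}
import numpy as np

def c_per(sigma2_hat, B_hat, n_components):
    # Ratio of the error area to the number of detected components times
    # the area of the cell spanned by the estimated basis (b1_hat, b2_hat).
    cell_area = abs(np.linalg.det(np.asarray(B_hat)))
    return np.pi * sigma2_hat / (n_components * cell_area)
\end{verbatim}
In the ranking below, this value is computed for 150 random patch positions per image and the median is used as the image's score.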
Since the identified graph highly depends on the patch position and the patch size, for each image we uniformly sample 150 patch positions and set the patch size to ${20 \times 20}$. For each set of parameters we find a lattice using Algorithm \ref{alg:auto-similaritydetection} and Algorithm \ref{alg:alternate} with parameters $\text{$\operatorname{NFA}_{\text{max}}$} = 1$, $\delta_M = 10$, $\delta_B = 10^{-2}$ and $N_{it} =10$. A statistical study of our ranking is presented in Figure \ref{fig:stat_ranking}. Note that, from a perceptual point of view, all textures from (a) to (n) are periodic except for (f), (j) and (k), which are examples for which our algorithm fails, whereas no texture from (o) to (y) is periodic. \begin{figure} \centering \subfloat[-9.75]{\includegraphics[width=.19\linewidth]{./img/bw/img_22.jpg}} \hfill \subfloat[-9.42]{\includegraphics[width=.19\linewidth]{./img/bw/img_01.jpg}} \hfill \subfloat[-9.12]{\includegraphics[width=.19\linewidth]{./img/bw/img_13.jpg}} \hfill \subfloat[-9.00]{\includegraphics[width=.19\linewidth]{./img/bw/img_18.jpg}} \hfill \subfloat[-8.80]{\includegraphics[width=.19\linewidth]{./img/bw/img_10.jpg}} \hfill \\ \subfloat[-8.24]{\includegraphics[width=.19\linewidth]{./img/bw/img_20.jpg}} \hfill \subfloat[-8.24]{\includegraphics[width=.19\linewidth]{./img/bw/img_05.jpg}} \hfill \subfloat[-7.99]{\includegraphics[width=.19\linewidth]{./img/bw/img_11.jpg}} \hfill \subfloat[-7.80]{\includegraphics[width=.19\linewidth]{./img/bw/img_26.jpg}} \hfill \subfloat[-7.77]{\includegraphics[width=.19\linewidth]{./img/bw/img_24.jpg}} \hfill \\ \subfloat[-7.74]{\includegraphics[width=.19\linewidth]{./img/bw/img_27.jpg}} \hfill \subfloat[-7.72]{\includegraphics[width=.19\linewidth]{./img/bw/img_14.jpg}} \hfill \subfloat[-7.47]{\includegraphics[width=.19\linewidth]{./img/bw/img_02.jpg}} \hfill \subfloat[-7.26]{\includegraphics[width=.19\linewidth]{./img/bw/img_06.jpg}} \hfill \subfloat[-7.21]{\includegraphics[width=.19\linewidth]{./img/bw/img_09.jpg}} \hfill \\ \subfloat[-7.20]{\includegraphics[width=.19\linewidth]{./img/bw/img_25.jpg}} \hfill \subfloat[-7.19]{\includegraphics[width=.19\linewidth]{./img/bw/img_12.jpg}} \hfill \subfloat[-7.17]{\includegraphics[width=.19\linewidth]{./img/bw/img_29.jpg}} \hfill \subfloat[-6.92]{\includegraphics[width=.19\linewidth]{./img/bw/img_08.jpg}} \hfill \subfloat[-6.86]{\includegraphics[width=.19\linewidth]{./img/bw/img_28.jpg}} \hfill \\ \subfloat[-6.78]{\includegraphics[width=.19\linewidth]{./img/bw/img_16.jpg}} \hfill \subfloat[-6.65]{\includegraphics[width=.19\linewidth]{./img/bw/img_17.jpg}} \hfill \subfloat[-6.56]{\includegraphics[width=.19\linewidth]{./img/bw/img_15.jpg}} \hfill \subfloat[-6.30]{\includegraphics[width=.19\linewidth]{./img/bw/img_21.jpg}} \hfill \subfloat[-6.16]{\includegraphics[width=.19\linewidth]{./img/bw/img_07.jpg}} \hfill \\ \caption{\figuretitle{Texture ranking} The $c_{per}$ criterion, defined in \eqref{eq:cper}, is computed for each setting. We associate to each image the median of the 150 criterion values and sort the images accordingly. (a) corresponds to the lowest criterion, \textit{i.e.} \ the most periodic image according to the $c_{per}$ criterion. (y) corresponds to the largest criterion, \textit{i.e.} \ the least periodic image according to $c_{per}$.
Under each image we give the logarithm of the median $c_{per}$ value.} \label{fig:ranking} \end{figure} \epstopdfsetup{outdir=./img/} \begin{figure} \centering \includegraphics[width=.5\linewidth]{./img/boxplot-eps-converted-to.pdf} \caption{\figuretitle{Boxplot for $c_{per}$ values} In this figure we present a boxplot of the $c_{per}$ values, defined in \eqref{eq:cper}, used to rank the texture images in Figure \ref{fig:ranking}. We recall that we use 150 random patch positions to compute the $c_{per}$ values. Letters on the $x$-axis correspond to the textures in Figure \ref{fig:ranking}. For each texture we present its median $c_{per}$ value. The lower, respectively upper, limit of the blue box corresponds to $25\%$, respectively $75\%$, of the computed $c_{per}$ values. The dashed line corresponds to the confidence interval with level $0.07$ under a normality assumption. Points outside this interval are plotted in red and the graphic was clipped between $0$ and $5 \times 10^{-3}$. The size of the confidence interval grows with the median value. It must be noted that the overlap of the blue boxes might explain some inconsistencies in our ranking. Another source of error lies in the model, which assumes that if a texture is periodic, its pattern is described by a $20 \times 20$ patch. In order to perform a more robust ranking, a multiscale approach should be preferred.} \label{fig:stat_ranking} \end{figure} \section{Conclusion} \label{sec:conclusion} In this paper we introduce a statistical model, the \textit{a~contrario} \ framework, to analyze spatial redundancy in images. We propose a general algorithm for detecting redundancy in natural images. It relies on Gaussian random fields as background models and takes advantage of the links between the $\ell^2$ norm and Gaussian densities. The \textit{a~contrario} \ formulation provides us with a statistically sound way of thresholding distances in order to assess similarity between patches. In this rationale, we replace the task of manually setting thresholds by the selection of a Number of False Alarms. We illustrate our contribution with three examples in various domains of image processing. Introducing a simple modification of the NL-means algorithm, we show that similarity detection (in this case, dissimilarity detection) in a theoretical \textit{a~contrario} \ framework can easily be embedded in any image denoising pipeline. For instance, the threshold we introduced could be integrated into the Non-Local Bayes algorithm \cite{lebrun2013nonlocal} in order to estimate mean and covariance matrices with probabilistic guarantees. The generality of our model allows for several extensions to non-Gaussian noises \cite{deledalle2009iterative} or to take into account the geometry of the patch space~\cite{houdard2017high, wang2013sure}. Turning to periodicity detection, we propose a novel graphical model using the output of Algorithm \ref{alg:auto-similaritydetection} in order to extract lattices from images. In this model, lattice extraction is formulated as the maximization of a log-posterior defined on a graph. We prove the finite-time convergence of Algorithm \ref{alg:alternate}. We provide image experiments illustrating the role of the hyperparameters in our study and we assess the importance of selecting adaptive Gaussian random fields as background models.
We remark that the expected number of false alarms parameter is linked to the choice of the input patch and give a range of possible values for $\operatorname{NFA}_{\text{max}}$ \ settings. We also illustrate a possible application in crystallography, as the method correctly identifies underlying lattices in alloys. This rationale could be used to identify symmetry groups (wallpaper groups) in alloys, following the work of \cite{liu2004computational}. Finally, our method is tested on natural images, where some of its limits, such as perspective distortion or sensitivity to occlusion phenomena, are identified. It must be noted that our method could easily be extended to color images by considering $\mathbb{R}^3$-valued instead of real-valued images. Our last application consists in giving a quantitative criterion for periodicity-based texture ranking. This criterion is based on the parameters estimated in Algorithm \ref{alg:alternate}. Since our background models are Gaussian random fields, which are good approximations of microtextures, we wish to explore the possibility of embedding our \textit{a~contrario} \ framework in texture analysis and texture synthesis algorithms. For instance, an \textit{a~contrario} \ methodology could be incorporated in the algorithm proposed by Raad et al. in \cite{raad2015conditional}. Another potential direction is to look at the behavior of the introduced dissimilarity functions for more general random fields, in order to handle more complex and structured situations such as parametric texture synthesis. \section{Acknowledgements} The authors would like to thank Denis Gratias for the crystallography images, Jérémy Anger for some of the natural images, Axel Davy who provided an OpenCL implementation of the NL-means algorithm and Thibaud Ehret for his insights and comments on denoising algorithms. \section{An a contrario framework for auto-similarity} \label{sec:similarity functions} We first introduce a notion of dissimilarity between patches of an input image. \begin{mydef}[Auto-similarity] Let $u$ be an image defined over a domain $\Omega = \llbracket 0,M-1 \rrbracket^2 \subset \mathbb{Z}^2$, with $M \in \mathbb{N} \backslash \{ 0\}$. Let $\omega \subset \mathbb{Z}^2$ be a patch domain. We introduce ${P_{\omega}(u)= (\dot{u}(\veclet{y}))_{\veclet{y} \in \omega}}$ the patch at position $\omega$ in the periodic extension of $u$ to $\mathbb{Z}^2$, denoted by~$\dot{u}$. We define the auto-similarity with patch domain $\omega$ and offset $\veclet{t}\in \mathbb{Z}^2$ by \begin{equation} \mathcal{AS}(u,\veclet{t},\omega) = \norm{P_{\veclet{t}+\omega}(u) - P_{\omega}(u)}_2^2 \; . \end{equation} \label{def:autosim} \end{mydef} The auto-similarity computes the distance between a patch of $u$ defined on a domain $\omega$ and the patch of $u$ defined by the domain $\omega$ shifted by the offset vector $\veclet{t}$. In what follows, we introduce an \textit{a~contrario} \ framework on the auto-similarity. This framework will allow us to derive an algorithm for detecting spatial redundancy in natural images. \label{sec:a_contrario_framework} In this section we fix an image domain $\Omega \subset \mathbb{Z}^2$ and a patch domain $\omega \subset \Omega$. We recall that our final aim is to design a criterion that will answer the following question: are two given patches similar? This criterion will be given by the comparison between the value of a dissimilarity function and a threshold $a$.
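Definition \ref{def:autosim} translates directly into a few lines of code. A minimal sketch, assuming NumPy, with the patch domain $\omega$ given as an array of (row, column) coordinates:
\begin{verbatim}
import numpy as np

def auto_similarity(u, t, omega):
    # AS(u, t, omega): squared l2 distance between the patch on omega and
    # the patch on omega + t, both read in the periodic extension of u.
    M, N = u.shape
    ys = np.asarray(omega)
    p = u[ys[:, 0] % M, ys[:, 1] % N]
    p_shifted = u[(ys[:, 0] + t[0]) % M, (ys[:, 1] + t[1]) % N]
    return ((p_shifted - p) ** 2).sum()
\end{verbatim}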
We will define the threshold $a$ so that few similarities are identified in the null hypothesis model, \textit{i.e.} \ so that similarity does not occur ``just by chance''. Thus we can reformulate the initial question: is the output of a dissimilarity function between two patches small enough? Or, to be more precise, how can we set the threshold $a$ in order to obtain a criterion for assessing similarity between patches? This formulation agrees with the \textit{a~contrario} \ framework \cite{desolneux2007gestalt} which states that a geometrical and/or perceptual structure in an image is meaningful if it is a rare event in a background model. This general principle is sometimes called the Helmholtz principle \cite{zhu1999embedding} or the non-accidentalness principle \cite{lowe2012perceptual}. Therefore, in order to control the number of similarities identified in the background model, we study the probability distribution of the auto-similarity function with input random image $U$ over $\Omega$. We will denote by $\mathbb{P}_0$ the probability distribution of $U$ over $\mathbb{R}^{\Omega}$, the images over $\Omega$. We will assume that $\mathbb{P}_0$ is a microtexture model, see Definition~\ref{def:microtexture} below for a precise definition of such a model. We define the following significant event which encodes spatial redundancy: $\mathcal{AS}(u,\veclet{t},\omega) \leq a(\veclet{t})$, where $a$, the threshold function, is defined over the offsets ($\veclet{t} \in \mathbb{Z}^2$) but also depends on other parameters such as $\omega$ or $\mathbb{P}_0$. The dependency of $a$ on $\veclet{t}$ cannot be omitted. For instance, even in a Gaussian white noise $W$, the probability distribution function of $\mathcal{AS}(W, \veclet{t}, \omega)$ depends on $\veclet{t}$. The Number of False Alarms ($\operatorname{NFA}$ ) is a crucial quantity in the \textit{a~contrario} \ methodology. A false alarm is defined as an occurrence of the significant event in the background model~$\mathbb{P}_0$. We recall that in our model the significant event is patch redundancy. This test must be conducted for every possible configuration of the significant event, \textit{i.e.} \ in our case we test every possible offset $\veclet{t}$. The $\operatorname{NFA}$ \ is then defined as the expectation of the number of false alarms over all possible configurations. Bounding the $\operatorname{NFA}$ \ ensures that the probability of identifying $n$ offsets with spatial redundancy is also bounded, see Proposition \ref{prop:a_contrario_bound}. In what follows we give the definition of the $\operatorname{NFA}$ \ in the spatial redundancy context. \begin{mydef}[$\operatorname{NFA}$] Let $U \sim \mathbb{P}_0$, where $\mathbb{P}_0$ is a background microtexture model. We define the auto-similarity probability map $\mathsf{AP}$ for any $\veclet{t} \in \Omega$, $\omega \subset \Omega$ and $a \in \mathbb{R}^{\Omega}$ by \begin{equation}\mathsf{AP}(\veclet{t},\omega, a) = \prob[0]{ \mathcal{AS}(U,\veclet{t},\omega) \leq a(\veclet{t})} \label{eq:def_autoprob} \; .\end{equation} We define the auto-similarity expected number of false alarms $\mathsf{ANFA}$ by \begin{equation} \label{eq:NFA} \mathsf{ANFA}(\omega, a) = \sum_{\veclet{t} \in \Omega} \mathsf{AP}(\veclet{t}, \omega, a) \; . \end{equation} \label{def:NFA} \end{mydef} Note that $\mathsf{AP}(\veclet{t}, \omega, a)$ corresponds to the probability that $\omega + \veclet{t}$ is similar to $\omega$ in the background model $U$.
For any $\veclet{t} \in \Omega$, the cumulative distribution function of the auto-similarity random variable $\mathcal{AS}(U,\veclet{t},\omega)$ under $\mathbb{P}_0$ evaluated at value $\alpha(\veclet{t})$ is given by $\mathsf{AP}(\veclet{t},\omega,\alpha(\veclet{t}))$. We denote by ${q \mapsto \mathsf{AP}^{-1}(\veclet{t},\omega,q)}$ the inverse cumulative distribution function, potentially defined by a generalized inverse ($ \mathsf{AP}^{-1}(\veclet{t},\omega,q) = \inf \{\alpha(\veclet{t}) \in \mathbb{R}, \ \mathsf{AP}(\veclet{t}, \omega, \alpha(\veclet{t})) \geq q \}$), of the auto-similarity random variable for a fixed offset $\veclet{t}$, with $q \in (0,1)$ a quantile. We now have all the tools to control the number of detected offsets in the background model. \begin{mydef}[Detected offset] Let $u \in \mathbb{R}^{\Omega}$ be an image, $\omega \subset \Omega$ a patch domain, and $a \in \mathbb{R}^{\Omega}$. An offset $\veclet{t}$ is said to be detected with respect to $a$, if $\mathcal{AS}(u,\veclet{t}, \omega) \leq a(\veclet{t})$. \label{def:detec_offset} \end{mydef} Note that a detected offset in $U \sim \mathbb{P}_0$ corresponds to a false alarm in the \textit{a~contrario} \ model. In what follows we suppose that the cumulative distribution function of $\mathcal{AS}(U,\veclet{t}, \omega)$ is invertible for every $\veclet{t} \in \Omega$. This ensures that for any $\veclet{t} \in \Omega$ and $q \in (0,1)$ we have \begin{equation} \label{eq:invertibility} \mathsf{AP}\left(\veclet{t}, \omega, \mathsf{AP}^{-1}\left(\veclet{t},\omega, q\right)\right) = q \; . \end{equation} \begin{prop} \label{prop:a_contrario_bound} Let $\operatorname{NFA}_{\text{max}} \geq 0$ and for all $\veclet{t} \in \Omega$ define $ a(\veclet{t}) = \mathsf{AP}^{-1}\left(\veclet{t}, \omega, \operatorname{NFA}_{\text{max}} / |\Omega|\right)$. We have that for any $n \in \mathbb{N} \without{0}$, \begin{equation*} \mathsf{ANFA}(\omega, a) = \operatorname{NFA}_{\text{max}} \quad \text{and} \quad \prob[0]{ \text{\quotem{at least $n$ offsets are detected in $U$}}} \leq \frac{\operatorname{NFA}_{\text{max}}}{n} \;. \end{equation*} \end{prop} \begin{proof} Using \eqref{eq:NFA}, and $a(\veclet{t}) = \mathsf{AP}^{-1}\left(\veclet{t}, \omega, \operatorname{NFA}_{\text{max}} / |\Omega|\right)$, we get \[ \mathsf{ANFA}(\omega, a) = \summ{\veclet{t} \in \Omega}{}{\mathsf{AP}(\veclet{t},\omega, a)} = \summ{\veclet{t} \in \Omega}{}{\mathsf{AP}\left(\veclet{t}, \omega, \mathsf{AP}^{-1}\left(\veclet{t},\omega, \operatorname{NFA}_{\text{max}} / \vertt{\Omega}\right)\right)} = \operatorname{NFA}_{\text{max}} \; , \] where the last equality is obtained using \eqref{eq:invertibility}. Concerning the upper-bound, we have, using the Markov inequality and \eqref{eq:def_autoprob}, for any $n \in \mathbb{N} \without{0}$ \begin{align*} \prob[0]{ \text{\quotem{\small at least $n$ offsets are detected in $U$}}} &= \prob[0]{\sum_{\veclet{t} \in \Omega}{}{\mathbb{1}_{\mathcal{AS}(U, \veclet{t}, \omega) \leq a(\veclet{t})}} \ge n} \\ &\leq \frac{\sum_{\veclet{t} \in \Omega}{}{\expec{\mathbb{1}_{\mathcal{AS}(U, \veclet{t}, \omega) \leq a(\veclet{t})}}}}{n} \leq \frac{\operatorname{NFA}_{\text{max}}}{n} \; , \end{align*} where $\mathbb{1}_{\mathcal{AS}(U, \veclet{t}, \omega) \leq a(\veclet{t})} = 1$ if $\mathcal{AS}(U, \veclet{t}, \omega) \leq a(\veclet{t})$ and $0$ otherwise. 
\end{proof} Thus, setting $a$ as in Proposition \ref{prop:a_contrario_bound}, we have that an offset $\veclet{t} \in \Omega$ is detected for an image~$u \in \mathbb{R}^{\Omega}$ if \begin{equation}\mathcal{AS}(u,\veclet{t},\omega) \leq \mathsf{AP}^{-1}\left(\veclet{t},\omega, \operatorname{NFA}_{\text{max}} / \vertt{\Omega}\right) \; . \label{eq:icdf_ineq}\end{equation} This \textit{a~contrario} \ detection framework can then be simply rewritten as 1) computing the auto-similarity function with input image $u$, 2) thresholding the obtained dissimilarity map with the inverse cumulative distribution function of the computed dissimilarity function under $\mathbb{P}_0$. The computed threshold depends on the offset and Proposition \ref{prop:a_contrario_bound} ensures probabilistic guarantees on the expected number of detections under $\mathbb{P}_0$. Using the inverse property of the inverse cumulative distribution function and \eqref{eq:icdf_ineq}, we obtain that an offset is detected if and only if \begin{equation}\prob[0]{\mathcal{AS}(U,\veclet{t},\omega) \leq \mathcal{AS}(u,\veclet{t},\omega)}= \mathsf{AP}\left(\veclet{t}, \omega, \mathcal{AS}(u,\veclet{t},\omega)\right) \leq \operatorname{NFA}_{\text{max}} /\vertt{\Omega} \; . \label{eq:true_detec}\end{equation} Therefore, the thresholding operation can be conducted either on $\mathcal{AS}(u,\veclet{t}, \omega)$, see \eqref{eq:icdf_ineq}, or on $\mathsf{AP}\left(\veclet{t}, \omega, \mathcal{AS}(u,\veclet{t},\omega)\right)$, see \eqref{eq:true_detec}. This property will be used in Section \ref{sec:detection-algorithm} to define a similarity detection algorithm based on the evaluation of $\mathcal{AS}(u,\veclet{t}, \omega)$.
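As an illustration, the two-step procedure above can be sketched in a few lines. The paper evaluates $\mathsf{AP}$ through the Wood F approximation; the sketch below substitutes a plain Monte-Carlo estimate of the per-offset quantile (the sampler \texttt{sample\_p0} for the background model $\mathbb{P}_0$ is assumed to be supplied by the user), but the thresholding logic is the same.
\begin{verbatim}
import numpy as np

def as_map(u, omega):
    # AS(u, t, omega) for every offset t in Omega (periodic extension of u).
    M, N = u.shape
    ys = np.asarray(omega)
    p = u[ys[:, 0] % M, ys[:, 1] % N]
    out = np.empty((M, N))
    for t0 in range(M):
        for t1 in range(N):
            q = u[(ys[:, 0] + t0) % M, (ys[:, 1] + t1) % N]
            out[t0, t1] = ((q - p) ** 2).sum()
    return out

def detect_offsets(u, omega, sample_p0, nfa_max=10.0, n_samples=200):
    # a(t) is the (NFA_max / |Omega|)-quantile of AS(U, t, omega) under P_0,
    # estimated here by Monte Carlo; an offset t is detected whenever
    # AS(u, t, omega) <= a(t).
    level = nfa_max / u.size
    bg = np.stack([as_map(sample_p0(), omega) for _ in range(n_samples)])
    a = np.quantile(bg, level, axis=0)
    return as_map(u, omega) <= a        # binary detection map D_map
\end{verbatim}
Note that with $\operatorname{NFA}_{\text{max}} = 10$ and a $256 \times 256$ image the target quantile level is about $1.5 \times 10^{-4}$, so many more than 200 background samples would be needed for an accurate estimate; the sketch only illustrates the structure of the procedure.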
{ "timestamp": "2019-04-16T02:03:19", "yymm": "1904", "arxiv_id": "1904.06428", "language": "en", "url": "https://arxiv.org/abs/1904.06428" }
\section{Introduction} \label{I} Many inherent features of phase transitions in many-particle systems can be understood and quantitatively described by analysing the interplay between entropy and energy. Taking magnetic ordering as an example, the energy-entropy interplay allows one to explain the absence of spontaneous magnetisation in low dimensions \cite{Ruelle1968,Landau} or the influence of structural (topological) disorder on magnetic ordering \cite{zittartz_1,zittartz_2,krasnytska1,Krasnytska13}. Therefore, much attention has been paid to the analysis of ordering phenomena in many-particle systems through models that allow one to tune the system's entropy in a controlled way. One of them is the recently introduced Potts model with invisible states \cite{Tamura10,Tamura11,Tanaka11a}. Unlike the standard $q-$state Potts model, this modification possesses $r$ additional invisible states. If a spin lies in one of these invisible states, it does not interact with the rest of the system. Thus introducing invisible states does not change the interaction energy, but rather the number of configurations, or equivalently, the entropy. This model was originally suggested to explain why a phase transition with $q-$fold symmetry breaking may be of a different order than predicted theoretically \cite{Tamura10,Tamura11,Tanaka11a}. Analysis of this model on different lattices has been a subject of intensive analytic \cite{Johnston13,Mori12,Enter11a,Enter11b,Ananikian13,Sarkanych17,Sarkanych18} and numerical \cite{Tamura10,Tamura11,Tanaka11a} studies. It has been shown that the number of invisible states ($r$) plays the role of a parameter whose increase makes the phase transition sharper. For example, the $q=2$ model with $r=30$ invisible states on a square lattice undergoes a first order phase transition, while $q=2$ and $r=0$ correspond to the ordinary Ising model, which is a textbook example of a second order phase transition \cite{Tamura10}. Interesting phenomena were observed for the Potts model with invisible states when it is considered on a complete graph \cite{Krasnytska16}. In the region $1\leq q< 2$ it possesses non-trivial critical behaviour \cite{Krasnytska16}: for small values of $r$ the system undergoes only a second order phase transition; for large $r$ there is only a first order phase transition; in between there is a region where both phase transitions occur, at different temperatures. Thus, the phase diagram is characterised by two marginal values: $r_{c1}$, where the first order phase transition appears, and $r_{c2}$, where the second order phase transition disappears. In the Ising case $q=2$ the phase diagram is characterised by one critical value $r_c\simeq3.62$, which separates the regions with first and second order phase transitions. In this paper we consider the above described model on a complex network \cite{networks_1}-\cite{networks_5}, being primarily interested in the Ising case $q=2$. Much attention has been paid to the study of phase transitions on complex networks \cite{Dorogovtsev08}. Besides pure academic interest, such problems have a number of practical motivations, ranging from sociophysics, where the structure of social interactions is properly described by a network topology, to nanophysics, where a network reflects the structure of particle aggregates.
Of particular interest are scale-free networks, where the node-degree distribution (the probability of a randomly chosen vertex to have degree $k$) is governed by a power-law decay: \begin{equation} \label{pofk} P(k)\sim 1 /{k^\lambda}, \, k \to \infty \, . \end{equation} It has been shown that many standard models of statistical physics manifest unusual features when considered on scale-free networks \cite{Krasnytska13,Leone02,Dorogovtsev02,Palchykov10,Igloi02}. In particular, it has been found that the decay exponent $\lambda$ determines the collective behaviour \cite{Leone02,Dorogovtsev02} and that its continuous change plays a similar role as the space dimensionality for lattice systems \cite{Holovatch92,Holovatch98}. In particular, the Ising model on a scale-free network is characterized by lower and upper critical values of $\lambda$: below $\lambda = 3$ the system is ordered at any finite temperature, above $\lambda = 5$ the system is governed by the usual mean-field critical exponents, whereas in the intermediate region the critical exponents become $\lambda$-dependent \cite{Dorogovtsev08,Leone02,Dorogovtsev02}. Moreover, logarithmic corrections to scaling appear at $\lambda=5$ \cite{Palchykov10}. In turn, for the standard $q$-state Potts model on a scale-free network the values of $\lambda$ and $q$ determine the order of the phase transition \cite{Krasnytska13,Igloi02,Krasnytska14}. Having been introduced rather recently, the Potts model with invisible states has not yet been analysed on a scale-free network, although its analysis on a complete graph revealed rather unexpected critical behaviour \cite{Krasnytska16}. Therefore, it is tempting to perform such a study in order to analyse the combined impact of different factors that control the amount of disorder in a many-particle system. Moreover, considering the model with invisible states on a scale-free network allows one to study within a unified approach the interplay of two forms of disorder: one arising from the number of configurations of the internal degrees of freedom (the number of invisible states $r$) and another arising from structural inhomogeneities, i.e. hubs (the node-degree distribution exponent $\lambda$). The rest of the paper is organised as follows: in Section \ref{II} we apply a mean-field approach to find the free energy; with this result at hand we proceed with the numerical analysis in Section \ref{III}; we draw conclusions in Section \ref{IV}. \section{Model and mean-field approximation} \label{II} The Hamiltonian of the Potts model with invisible states reads, see \cite{Krasnytska16}: \begin{equation}\label{1a} - H(q,r)=\sum_{<i,j>}J _{ij}\sum_{\alpha=1}^q \delta _{S_i,\alpha}\delta _{\alpha,S_j}+ h\, \sum _{i=1}^N \delta_{S_i,1}, \end{equation} where $S_i \in \lbrace 1,\ldots,q,q+1,\ldots,q+r \rbrace$ is the Potts variable, $q$ and $r$ are the numbers of visible and invisible states respectively, $\delta_{\alpha,S_j}$ is the Kronecker delta and an external magnetic field $h$ is introduced to favour the first visible state. The first summation in (\ref{1a}) is performed over all pairs of spins in the network of $N$ nodes, and the second sum requires both interacting spins to be in the same visible state. Considering this model on a network, one assumes the couplings $J_{ij}$ to be given by the network adjacency matrix: $J_{ij}=1$ if nodes $i$ and $j$ are connected and $J_{ij}=0$ otherwise. All further analytical results are derived for general $q$ and numerically evaluated for the Ising case $q=2$.
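As a side illustration, the Hamiltonian (\ref{1a}) is straightforward to evaluate for a given spin configuration. A minimal sketch, assuming NumPy; the function name is ours and $J$ is taken to be the symmetric adjacency matrix with zero diagonal:
\begin{verbatim}
import numpy as np

def minus_hamiltonian(spins, J, q, h=0.0):
    # -H from the Hamiltonian above: spins take values in {1, ..., q + r};
    # values above q are invisible states and do not interact.
    visible = spins <= q
    # spins equal and visible (if two equal spins share a value <= q,
    # both are visible, so one mask suffices)
    same_visible = (spins[:, None] == spins[None, :]) & visible[:, None]
    pair_term = 0.5 * (J * same_visible).sum()   # each pair <i,j> once
    field_term = h * np.count_nonzero(spins == 1)
    return pair_term + field_term

# usage: spins = np.random.randint(1, q + r + 1, size=N)
\end{verbatim}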
For the purpose of our analysis we adopt a variant of the mean-field approach presented in Refs. \cite{Krasnytska13,Igloi02}. Let us introduce the local thermodynamic averages: \begin{eqnarray}\label{2'} \langle \delta _{S_i,\alpha} \rangle=\left\{ \begin{array}{ccc} & \mu_i \, , & \alpha=1, \\ & \nu_{1i} \, , & \alpha=2,\ldots,q, \\ & \nu_{2i}\, , & \alpha=q+1,\ldots,q+r\, , \end{array} \right. \end{eqnarray} where the thermodynamic averaging is performed with the Hamiltonian (\ref{1a}). The normalization condition \begin{equation}\label{3'} \mu_i+(q-1)\nu_{1i}+r\nu_{2i}=1 \end{equation} enables one to construct two independent local order parameters. Taking into account the low- and high-temperature asymptotics of the averages (\ref{2'}) and the (desired) asymptotics of the order parameters, see Table \ref{tab1}, one can define two local order parameters by \begin{equation}\label{4'} m_{1i}=\mu_i-\nu_{1i}, \quad m_{2i}=\mu_i-\nu_{2i}. \end{equation} \begin{table}[b] \caption{Low- and high-temperature asymptotics of the thermodynamic averages, Eq.~(\ref{2'}), and of the order parameters, Eq.~(\ref{4'}). \label{tab1}} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $\beta \rightarrow \infty$ & $\mu=1$ & $\nu_{1}=0$ & $\nu_{2}=0$ & $m_1=1$ & $m_2=1$ \\ \hline \hline $\beta\rightarrow 0$ & $\mu=\frac{1}{q+r}$ & $\nu_{1}=\frac{1}{q+r}$ & $\nu_{2}=\frac{1}{q+r}$ & $m_1=0$ & $m_2=0$ \\ \hline \end{tabular} \end{center} \end{table} Using the definitions of the local averages (\ref{2'}) and neglecting second order contributions from the fluctuations $\delta _{S_i,\alpha} - \langle \delta _{S_i,\alpha} \rangle$, we arrive at the mean-field Hamiltonian: \begin{eqnarray}\label{9'} - H(q,r)=\sum_{<i,j>}J_{ij}[\mu_i(2\delta _{1,S_j}-\mu_j)+\\ \nonumber \sum_{\alpha=2}^q (2\delta _{\alpha,S_i}-\nu_{1i})\nu_{1j}]+h\sum_i \delta_{S_i,1}. \end{eqnarray} Taking the trace of (\ref{9'}) over all possible spin configurations we obtain the free energy: \begin{eqnarray} \label{f'} f(\mu,\nu_1)=\sum_{i,j} J_{ij}(\mu_i\mu_j+(q-1)\nu_{1i}\nu_{1j})-\\ \nonumber \frac{1}{\beta}\sum_i \ln\Big( e^{\beta(h+2\sum_j J_{ij}\mu_j)}+(q-1)e^{2\beta\sum_j J_{ij}\nu_{1j}}+r\Big). \end{eqnarray} Within the mean-field approach we also consider a coupling constant proportional to the probability that two nodes are connected: \begin{equation}\label{11a} J_{ij}=Jp_{ij}=\frac{J k_i k_j}{N\bar{k}}, \end{equation} where $k_i$ stands for the degree of node $i$ and $\bar{k}$ is the average node degree in the network. We also introduce global weighted order parameters according to the rules \begin{equation}\label{11b} m_1=\frac{\sum_i k_i m_{1i}}{\sum_i k_i}, \hspace{2cm} m_2=\frac{\sum_i k_i m_{2i}}{\sum_i k_i}. \end{equation} With all these substitutions, in the thermodynamic limit the free energy per site as a function of the global order parameters reads \begin{dmath} \label{ff0} f(m_1,m_2)=\frac{J\langle k\rangle}{(q+r)^2}\Big((rm_2+1+(q-1)m_1)^2+ (q-1)(rm_2+1-(r+1)m_1)^2\Big)-\frac{1}{\beta}\int_2^\infty dk P(k) \ln\Big(e^{\beta(h+\frac{kJ}{q+r}(m_1(q-1)+1+rm_2))}+(q-1)e^{\frac{\beta Jk}{q+r}(m_2r+1-(r+1)m_1)}+r\Big), \end{dmath} where $P(k)$ is the node-degree distribution. Henceforth we only consider a scale-free network governed by the power-law decay (\ref{pofk}). Eq. (\ref{ff0}) gives the free energy as a function of the two order parameters $m_1$ and $m_2$ with a set of parameters $q,r,\beta,\lambda$. We also set $J=1$, so that temperature is measured in units of $J$.
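For reference, (\ref{ff0}) can be evaluated numerically by truncating the integral over the degree distribution at a finite $k_{\max}$ (an approximation made only for this sketch) and then minimised over $(m_1,m_2)$ with a derivative-free simplex search, as discussed below. A minimal sketch, assuming SciPy:
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp
from scipy.optimize import minimize

def free_energy(m, beta, lam, q=2, r=2, h=0.0, J=1.0, k_max=5000):
    # Discretised f(m1, m2): P(k) ~ k^(-lambda) truncated at k_max.
    m1, m2 = m
    k = np.arange(2, k_max + 1, dtype=float)
    p = k ** (-lam)
    p /= p.sum()
    k_mean = (k * p).sum()
    A = (r * m2 + 1 + (q - 1) * m1) / (q + r)   # proportional to mu
    C = (r * m2 + 1 - (r + 1) * m1) / (q + r)   # proportional to nu_1
    quad = J * k_mean * (A ** 2 + (q - 1) * C ** 2)
    # log( exp(beta(h + J k A)) + (q-1) exp(beta J k C) + r ), stably:
    args = np.stack([beta * (h + J * k * A),
                     beta * J * k * C,
                     np.zeros_like(k)])
    weights = np.array([[1.0], [q - 1.0], [float(r)]])
    log_term = logsumexp(args, b=weights, axis=0)
    return quad - (p * log_term).sum() / beta

# Simplex (Nelder-Mead) minimisation at one temperature T, beta = 1/T:
res = minimize(free_energy, x0=[0.5, 0.5], args=(1.0 / 2.5, 3.8, 2, 10),
               method='Nelder-Mead')
\end{verbatim}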
Usually in such a case, the next step is to present the free energy as a power series in the order parameters (Landau free energy). In our case the presence of two order parameters makes this expansion too cumbersome for a direct analytic treatment, and we therefore switch to a numerical analysis of the free energy. For this purpose we adopt the simplex method \cite{NelderMead}. Its advantage is that it does not require the derivatives of the function, only the ability to evaluate it. With this numerical technique at hand, we proceed in the following Section according to the following scheme. For fixed values of $q, r$ and $\lambda$ we sweep through a certain region of temperatures and calculate the values of $m_1$ and $m_2$ which minimise the free energy. Based on the temperature behaviour of the order parameters we can then draw conclusions about the order of the phase transition, the critical temperature and the critical exponents. \section{Results} \label{III} In this section we investigate the Ising model $(q=2)$ with an arbitrary number of invisible states $r$ near the spontaneous phase transition point ($h=0$) on a scale-free network. All our results will be compared with the analytic results known in the limit $r=0$ \cite{Leone02,Dorogovtsev02,Palchykov10}. This particular case of the Ising model on a scale-free network is hereafter called the ``genuine'' Ising model. We will mostly be interested in the region $3\leq \lambda\leq5$, where $\lambda$-dependent critical exponents were observed. For the invisible-states Ising model on a scale-free network, one would expect the node-degree distribution exponent to have a similar effect as for the genuine Ising model. Indeed, our analysis supports this conjecture. In particular, for low values $\lambda\leq 3$ the system remains ordered at any finite temperature. However, the region $\lambda > 3$ appears to exhibit some non-trivial features which we discuss in more detail below. Let us start with an analysis of the critical temperature $T_c$, at which any ordering disappears; it is located by the condition $m_1=0$. Earlier it was shown that on a complete graph $m_2$ vanishes only at infinite temperature, thus only the first order parameter can be used to determine the order of the phase transitions and the critical temperature. In Fig. \ref{fig1} the critical temperature of the Ising model with invisible states on a scale-free network is given as a function of $\lambda$ for various numbers of invisible states $r$ ranging from 0 to 60. Critical temperatures obtained with our numerical technique are in good agreement with analytical results for the genuine Ising model (see the upper solid and dashed lines in Fig. \ref{fig1}) \cite{Dorogovtsev02}. From the plot it is clear that the critical temperature decreases with an increase of $\lambda$. When $\lambda$ decreases below the marginal value $\lambda=3$, no finite temperature can break the spontaneous ordering: the system remains ordered at any $T$. This reflects the fact that for small $\lambda$ there are many nodes with high degree (hubs), making the network strongly connected. Accordingly, in the limit $\lambda\to 3+0$ the critical temperature rises to $T_c\to\infty$. \begin{figure} \includegraphics[width=\columnwidth]{tc.pdf} \caption{\label{fig1} Critical temperature of the Ising model with invisible states on a scale-free network as a function of the degree distribution exponent $\lambda$ for different values of $r$: $r=0,5,10,15,20,30,40,50,60$ going down the plot.
The dashed line represents analytical results for the genuine Ising case \cite{Leone02,Dorogovtsev02,Palchykov10}.} \end{figure} On the other hand, from Fig. \ref{fig1} one can also see that the critical temperature decreases with an increase in the number of invisible states. This is because $r$ regulates the entropy of the system: the more entropy there is, the easier it is to break the ordering. In the limit $r\to\infty$ one recovers the results for a non-interacting system, i.e. $T_c=0$ \cite{Krasnytska16}. The next step is to analyse the behaviour of the order parameters. Continuous phase transitions are described by continuous dependencies of the order parameters on temperature. If, on the contrary, the function $m_1(T)$ has a discontinuity, this signals a jump between two different states of the system, which we associate with a first order phase transition. As an example, in Fig. \ref{fig3} we present the order parameter dependencies on the reduced temperature $\tau=T/T_c$ for the fixed value $\lambda=3.8$ and various values of $r$.\footnote{Hereafter we use the value $\lambda=3.8$ to illustrate typical properties of the system, which remain qualitatively the same throughout the region $3<\lambda<5$.} It is worth noting that in the case $r=0$ there is only one order parameter, as in the genuine Ising model. As one can see from these plots, $m_2$ does not vanish at criticality and slowly decays as the temperature rises. The same behaviour was observed on the complete graph \cite{Krasnytska16}. \begin{figure} \includegraphics[width=\columnwidth]{m1t.pdf} \includegraphics[width=\columnwidth]{m2t.pdf} \caption{\label{fig3} Order parameters as functions of the reduced temperature $\tau=T/T_c$ for various values of $r$ and fixed $\lambda=3.8$. Different values of $r$ lead to different critical regimes.} \end{figure} For small numbers of invisible states the system undergoes a second order phase transition, while for large numbers of invisible states the transition is discontinuous. In between, there is a region where two transitions occur at different temperatures: at the lower temperature $T^*$ there is a jump in the order parameter (which we associate with a first order phase transition), while at the higher temperature $T_c$ the remaining ordering vanishes completely. Similar behaviour was previously observed in Ref. \cite{Krasnytska16} for the complete graph, but only in the region $1\leq q <2$, while the limiting case $q=2$ showed a sharp distinction between regimes with transitions of different order. Topological disorder changes the critical behaviour: even in the Ising case it is characterised by two marginal values $r_{c1}$ and $r_{c2}$. In Fig. \ref{fig3_2} we show the phase diagram in the $(T,r)$-plane for the fixed value $\lambda=3.8$. The lower (blue) and upper (yellow) lines represent the first and second order phase transition lines, respectively. \begin{figure} \includegraphics[width=\columnwidth]{lif-plot.pdf} \caption{\label{fig3_2} Phase transition temperatures $T^*$ and $T_c$ as functions of $r$ for fixed $\lambda=3.8$. The two solid lines represent the first and second order phase transition temperatures; the two dashed vertical lines show the marginal values of $r$ and delimit the region where the two phase transitions coexist.} \end{figure} The second order phase transition line together with the first order phase transition line divides the $(T,r)$-plane into three regions.
Below the lower (blue) line the system is in an ordered state, while above the upper (yellow) line the system is fully disordered. In the region between the lines, the system is characterised by residual ordering. At the point $(r_{c2},T_c)$, where the two lines merge, these three phases coincide, making this point tricritical. The two vertical lines mark the marginal values $r_{c1}$ and $r_{c2}$, i.e. they delimit the region where the two phase transitions coexist. For each value of $\lambda$ there are two marginal values $r_{c1}(\lambda)$ and $r_{c2}(\lambda)$. These two values divide the $(r,\lambda)$-plane into three regions with different critical behaviours. The next step is to analyse the properties of the second order phase transition. With the order parameters as functions of temperature, it is straightforward to estimate the critical exponent $\beta$, which is defined by: \begin{equation} m_1\sim\left(\frac{T_c-T}{T_c}\right)^\beta. \label{17} \end{equation} Since we minimise the free energy numerically, the only way to proceed with the definition (\ref{17}) is to fit the obtained values $m_1(T)$. Because critical exponents are defined only asymptotically at the critical temperature, such a fit yields an effective value $\beta_{\rm eff}$. In Fig.~\ref{fig5} we show the critical exponent $\beta_{\rm eff}$ for different values of $r$. For the genuine Ising case in the region we are interested in, the critical exponents are $\lambda$-dependent. Analytical results yield \cite{Leone02,Dorogovtsev02,Palchykov10}: \begin{equation}\label{exp} \beta(\lambda)=1/(\lambda-3)\, . \end{equation} In the plot we consider $\lambda=3.8$, thus the theoretical prediction is $\beta(3.8)=1.25$. This value is shown by a solid horizontal line. We can see that, regardless of $r$, the second order phase transition is characterised by the same critical exponent. The slight tendency of $\beta_{\rm eff}$ to increase stems from the fact that its value depends strongly on the temperature window used for the fit: the smaller the window, the better the temperature dependence of the first order parameter is described by the single power law (\ref{17}). However, with increasing $r$ the window has to be taken ever smaller, making it much harder to perform numerical calculations very close to the critical temperature. \begin{figure} \includegraphics[width=\columnwidth]{beta_r_.pdf} \caption{\label{fig5} Critical exponent $\beta$ as a function of the number of invisible states $r$ for fixed $\lambda=3.8$. The solid line shows the exact result for the genuine Ising model, $\beta=\frac1{\lambda-3}|_{\lambda=3.8}=1.25$, Eq. (\ref{exp}).} \end{figure} In Fig. \ref{fig4} the phase diagram in the $(r,\lambda)$-plane is shown. It is characterised by the two lines $r_{c1}(\lambda)$ and $r_{c2}(\lambda)$. Below the first one, as the temperature rises, the order parameter changes continuously until it vanishes. Above the $r_{c2}(\lambda)$ line only a first order phase transition occurs, meaning that as the temperature increases the order parameter decreases, and at the transition temperature it abruptly drops to zero (see Fig. \ref{fig3} for the $r=30$ case). In the region between the lines the system undergoes two phase transitions. At $T^{*}<T_c$ a first order phase transition occurs, with a jump between two non-zero values of $m_1$. Then, at the second order phase transition temperature $T_c$, the first order parameter vanishes. \begin{figure} \includegraphics[width=\columnwidth]{phasediag.pdf} \caption{\label{fig4} Phase diagram of the Ising model with invisible states. The three regions shown differ in their critical behaviour. In the lower (blue) region the system possesses only a second order phase transition; in the region between the lines there are both first and second order phase transitions at different temperatures; in the upper (yellow) region only a first order phase transition occurs.} \end{figure}
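To make the extraction of $\beta_{\rm eff}$ described above concrete, the following short Python sketch fits synthetic order-parameter data; the amplitude and the correction-to-scaling term are assumptions made purely for illustration. It also demonstrates the dependence of $\beta_{\rm eff}$ on the fitting window.
\begin{verbatim}
import numpy as np

# Synthetic m1 data below Tc as a function of t = (Tc - T)/Tc; the
# amplitude and the sqrt(t) correction term are illustrative assumptions.
t = np.logspace(-3.0, -1.3, 12)
beta_true = 1.25                        # beta(3.8) = 1/(3.8 - 3), Eq. (exp)
m1 = 0.8 * t**beta_true * (1.0 + 0.5 * np.sqrt(t))

# Effective exponent: least-squares slope of log m1 versus log t.
slope, _ = np.polyfit(np.log(t), np.log(m1), 1)
print("full window:      beta_eff = %.3f" % slope)

# Shrinking the window toward t -> 0 moves beta_eff toward the true
# exponent, illustrating the window dependence of the fit.
mask = t <= 1e-2
slope, _ = np.polyfit(np.log(t[mask]), np.log(m1[mask]), 1)
print("window t <= 0.01: beta_eff = %.3f" % slope)
\end{verbatim}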
For a fixed value of $\lambda$ and $r$ below the $r_{c1}(\lambda)$ line, $m_1$ depends continuously on temperature. Upon crossing that line, a jump in the order parameter appears. In the region $r_{c1}(\lambda)<r\leq r_{c2}(\lambda)$ the discontinuity grows with $r$, and the first order phase transition approaches the second order one, $T^{*}\to T_c$. When the $r_{c2}(\lambda)$ line is crossed, the two critical temperatures coincide and only the first order phase transition remains, the residual value of the order parameter being zero. Note that in the region $\lambda>4$ the order parameter grows faster than linearly close to the second order phase transition temperature ($1/\beta=\lambda-3$ is larger than one there), which makes distinguishing between first and second order phase transitions even harder. Evaluated numerically, the lines $r_{c1}(\lambda)$ and $r_{c2}(\lambda)$ do not cross at $r\simeq 3.62$, $\lambda=5$, as one would expect from the analysis of the Ising model with invisible states on the complete graph \cite{Krasnytska16}. \section{Conclusions} \label{IV} Universality is one of the key principles of modern statistical physics. For lattice systems, critical properties are defined by the space dimensionality, the range of interaction and the symmetries of the order parameter. For systems on scale-free networks, the degree distribution exponent plays a similar role. In addition, invisible states have been shown to influence the universality class by changing only the entropic contribution to the free energy \cite{Sarkanych17,Sarkanych18}. As shown in this paper, when these two mechanisms act together the Ising model exhibits non-trivial properties. The critical temperature is a function of both of these parameters. In the region $3\leq\lambda\leq5$ the phase diagram is now divided into three domains with different critical behaviour: for $r\leq r_{c1}(\lambda)$ the order parameter depends on the temperature continuously, meaning that a second order phase transition occurs; for $r_{c1}(\lambda)<r\leq r_{c2}(\lambda)$ the system undergoes, at the lower temperature $T^*$, a first order phase transition between two ordered phases, while at the higher temperature $T_c$ a second order phase transition takes place; finally, for $r> r_{c2}(\lambda)$ only the first order phase transition remains. This kind of behaviour was earlier reported for the Potts model with invisible states when $1\leq q<2$. Here we observe it even in the Ising case $q=2$. We also show that adding invisible states does not change the values of the critical exponents in the region where the second order phase transition exists. \section*{Acknowledgement} We would like to thank Bertrand Berche, Yurij Holovatch and Ralph Kenna for fruitful discussions and useful comments.
{ "timestamp": "2019-04-16T02:11:25", "yymm": "1904", "arxiv_id": "1904.06563", "language": "en", "url": "https://arxiv.org/abs/1904.06563" }
\section*{Acknowledgements}\label{sec:Ack} The authors would like to thank Michael Stingl for providing the Matlab routines used to visualize the results in Section~\ref{subsec:m-scale}. \section{Conclusion}\label{sec:conclusion} In this paper, we proposed a PBM method to solve the dual of the VTS formulation of the minimum compliance topology optimization problem. We compared it with the OC method, one of the most popular methods for topology optimization, on the one hand, and with the IP method as an established method for general convex problems, on the other. The implementations of both the PBM and IP algorithms were tailored to the specific problem. All three methods used a multigrid preconditioned MINRES solver for the linear systems arising in each iteration. In our numerical experiments, the PBM method clearly came out on top. It was around 20 times faster in terms of CPU time than the OC method when requiring the same degree of optimality. Even when using a very generous stopping criterion in the OC method---one that yields visibly sub-optimal results---PBM was still faster. The IP method suffers from the characteristic ill-conditioning of the system matrix, which in some of our experiments prevented convergence altogether. Here, PBM proved to be much more robust, in addition to being considerably faster. Still, convergence was not guaranteed for all large-scale examples when sticking to the strictest stopping criterion. Judging by the symmetry and smoothness of the final design, the results were still satisfactory. Overall, the convergence behavior of the PBM method seems to be sensitive to changes in parameters such as stopping tolerances or scaling parameters. A thorough parameter study might further improve the algorithm. We did not consider the OC method for such large-scale problems, as its expected computation time simply disqualified it as a competitor. It is however possible that it would eventually converge even for those problems where PBM does not. Note that this would most likely take days or even weeks, as compared to the typical (successful) PBM run, which took less than 12 hours. Since the OC method does not feature multipliers or barrier or penalty parameters tending to 0, it is not as susceptible to ill-conditioning as the PBM or IP method. This means that the advantage of the OC method, when compared with PBM, could be reliability, albeit at the price of serious inefficiency. \section{Introduction} \label{sec:intro} This file is documentation for the SIAM \LaTeX\ style, including how to typeset the main document, the {\scshape Bib}\TeX\xspace\ file, and any supplementary material. More information about SIAM's editorial style can be found in the style manual, available at \url{https://www.siam.org/journals/pdf/stylemanual.pdf}. The major changes in the SIAM standard class are summarized in \cref{sec:changes}. The SIAM \LaTeX\@ files can be found at \url{https://www.siam.org/journals/auth-info.php}. The files that are distributed for the standard macros are given below. \begin{itemize} \item \texttt{siamart171218.cls} (required): Main SIAM standard \LaTeX\ class file. \item \texttt{siamplain.bst} (required): Bibliographic style file for {\scshape Bib}\TeX\xspace. \item \texttt{docsiamart.tex}: Produces this documentation. \item \texttt{references.bib}: {\scshape Bib}\TeX\xspace\ database for this documentation and examples. \item \texttt{ex\_article.tex}: Template for article. \item \texttt{ex\_supplement.tex}: Template for supplement.
\item \texttt{ex\_shared.tex}: Template for shared information for article and supplement. \end{itemize} To use these files, put \texttt{siamart171218.cls} and \texttt{siamplain.bst} in the directory with your paper or, alternatively, into your \LaTeX\@ and {\scshape Bib}\TeX\xspace\@ paths, respectively. The outline of a SIAM \LaTeX\ article is shown in \cref{ex:outline}. Templates are provided and discussed in more detail in \cref{sec:template}. \begin{example}[label={ex:outline},listing only,% listing options={style=siamlatex,{morekeywords=[1]{maketitle}, morekeywords=[2]{siamart171218}},}]% {Document outline} \documentclass{siamart171218} \begin{document} \maketitle \end{document} \end{example} \section{Class options} \label{sec:class-options} Class options can be included in the bracketed argument of the \code{\documentclass} command, separated by commas. The possible class options are: \begin{itemize} \item \code{review} --- Recommended for submitting your manuscript to a SIAM journal. Adds line numbers as well as the statement ``This manuscript is for review purposes only'' to the bottom of each page. \item \code{final} --- Turns off the black boxes that help authors identify lines that are too long. The final published version will have this option on. \item \code{supplement} --- Specifies that the file is a supplement and not the main document, causing changes in the appearance of the title and numbering; see \cref{sec:supplement} for details. \item \code{hidelinks} --- Turns off colors on hyperlinks; see \cref{sec:cr+hyp}. The hyperlinks still exist, but there is no color to differentiate them. The final published version will have this option on. \end{itemize} \section{Front matter} \label{sec:front} The title and author parts are formatted using the standard \code{\title}, \code{\author}, and \code{\maketitle} commands as described in Lamport \cite{La86}. The title and author should be declared in the preamble. The title and author names are automatically converted to uppercase in the document. If there is more than one author, each additional author should be preceded by the \code{\and} command. The addresses and support acknowledgments are added via \code{\thanks}. Each author's thanks should specify their address. The support acknowledgment should be put in the title thanks, unless specific support needs to be specified for individual authors, in which case it should follow the author address. The header for this file was produced by the code in \cref{ex:header}, including an example of a shared footnote. Each thanks produces a footnote, so the footnote of the second author is \#3. The \code{\headers{title}{authors}} command, with the title (possibly shortened to fit) and the authors' names, creates the page headers, automatically converted to uppercase. \examplefile[label={ex:header},listing only,% listing options={style=siamlatex,% deletetexcs={and,thanks,title,author},% {moretexcs=[2]{and,thanks,title,author,maketitle,headers,email}}} ]{Title and authors in preamble}{tmp_\jobname_header.tex} \newpage Following the author and title is the abstract, key words listing, and AMS subject classifications, designated using the \code{abstract}, \code{keywords}, and \code{AMS} environments. Authors are responsible for providing AMS numbers which can be found on the AMS web site \cite{AMSMSC2010}. The abstract, keywords, and AMS subject classifications for this document are specified in \cref{ex:abstract}.
\examplefile[label={ex:abstract},% before upper={\preamble{\bs newcommand\{\bs BibTeX\}\{\{\bs scshape Bib\}\bs TeX\bs xspace\}}}, listing only,% listing options={style=siamlatex,% {morekeywords=[2]{abstract,keywords,AMS}}} ]{Abstract, keywords, and AMS classifications}{tmp_\jobname_abstract.tex} A more complete example, including a PDF supplement, that uses the included files \texttt{ex\_article.tex}, \texttt{ex\_supplement.tex}, and \texttt{ex\_shared.tex} is discussed in \cref{sec:template}. The example files can be used as a starting point for producing a document. \section{Cross references and hyperlinks} \label{sec:cr+hyp} SIAM now supports cross references and hyperlinks via the \texttt{cleveref} and \texttt{hyperref} packages, which are loaded by the class file. \subsection{Cleveref} \label{sec:cleveref} SIAM strongly recommends using the commands provided by the \texttt{cleveref} package for cross referencing. The package is automatically loaded and already customized to adhere to SIAM's style guidelines. To create a cross reference, use the command \code{\cref} (inside sentence) or \code{\Cref} (beginning of a sentence) in place of the object name and \code{\ref}. The \texttt{cleveref} package enhances \LaTeX's cross-referencing features, allowing the format of cross references to be determined automatically according to the ``type'' of cross reference (equation, section, etc.) and the context in which the cross reference is used. So, the package \emph{automatically} inserts the object name as well as the appropriate hyperlink; see \cref{ex:cref}. It may require two \LaTeX\@ compilations for the references to show up correctly. Additional examples are shown in the sections below for equations, tables, figures, sections, etc. \begin{example}[label=ex:cref,bicolor,listing options={style=siamlatex,% {morekeywords=[2]{cref,ref}}}]{Advantage of using cleveref} The normal way to get a cross reference with a hyperlink requires a lot of typing: \hyperref[thm:mvt]{Theorem~\ref*{thm:mvt}}. The \texttt{cleveref} package gets both the name and hyperlink automatically using a single macro: \cref{thm:mvt}. It also handles multiple references with the same macro, such as \cref{thm:mvt,fig:pgfplots,fig:testfig}. \end{example} \subsection{Hyperref} \label{sec:hyperef} Hyperlinks are created with the \code{\href} and \code{\url} commands, as shown in \cref{ex:href}. SIAM has also defined the \code{\email} command, as shown in \cref{ex:header}. You can hide links (i.e., turn off link colors) with the \code{hidelinks} option. \begin{example}[label={ex:href},bicolor,% listing options={style=siamlatex,% {morekeywords=[2]{href,url}}}]{Creating hyperlinks} The \href{https://www.siam.org}{SIAM homepage} has general information. Note that the colored text will \emph{not} appear in the print version nor will the hyperlink be active, so the writer may want to specify the location explicitly instead by using \url{https://www.siam.org}. \end{example} Note that homepage links via \code{\url} in the \code{\thanks} environment require special formatting for the tilde (\string~) character. The formatting is used in the template and shown in \cref{ex:shared}. \section{Math and equations} \label{sec:math} Here we show some example equations, with numbering, and examples of referencing the equations. SIAM now includes the package \texttt{amsmath} by default, and we include some of its features as well, although the reader should consult the package user manual for further guidance \cite{amsmath,shortmath}.
Several of the examples are adapted from Mittelbach and Goossens's guide to \LaTeX~\cite{MiGo04}. \Cref{ex:textmath} is a straightforward example of inline mathematics equations that does not use any special packages or features. \begin{example}[label={ex:textmath},bicolor]{Inline math} The following shows an example of math in text: Let $S=[s_{ij}]$ ($1\leq i,j\leq n$) be a $(0,1,-1)$-matrix of order $n$. \end{example} In \cref{ex:bbm}, we show the recommended method for getting blackboard fonts using the \texttt{amsfonts} package. This is not loaded by default and must be included in the preamble. \begin{example}[label={ex:bbm},bicolor,before upper={\preamble{\bs usepackage\{amsfonts\}}},% listing options={style=siamlatex,% {morekeywords=[2]{mathbb}}}]{Blackboard math} Blackboard bold characters, such as $\mathbb{C}$ and $\mathbb{R}$, should be created with the \texttt{amsfonts} package, although this is not included by default. \end{example} \Cref{ex:smallmatrix} shows the \code{smallmatrix} environment for an inline matrix from the \texttt{amsmath} package, which is included by default. \begin{example}[label={ex:smallmatrix},bicolor,% listing options={style=siamlatex,% {morekeywords=[2]{smallmatrix}}}]{Inline matrix} Matrices of no more than two rows appearing in text can be created as shown in the next example: $B = \bigl[ \begin{smallmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{smallmatrix} \bigr]$. \end{example} Bigger matrices can be rendered with environments from the \texttt{amsmath} package, such as \code{bmatrix} and \code{pmatrix} used in \cref{ex:matrices}. \begin{example}[label={ex:matrices},bicolor,% listing options={style=siamlatex,% {morekeywords=[2]{bmatrix,pmatrix}}}]{Creating matrices} Display matrices can be rendered using environments from \texttt{amsmath}: \begin{equation}\label{eq:matrices} S=\begin{bmatrix}1&0\\0&0\end{bmatrix} \quad\text{and}\quad C=\begin{pmatrix}1&1&0\\1&1&0\\0&0&0\end{pmatrix}. \end{equation} \Cref{eq:matrices} shows some example matrices. \end{example} \newpage \Cref{ex:dmo} shows how to use the \code{\DeclareMathOperator} command from the \texttt{amsopn} package to declare the \code{\Range} macro. (This example also uses the \texttt{braket} package for the \code{\set} macro, but this is not necessarily recommended by SIAM.) \begin{example}[label={ex:dmo},% before upper={\preamble{\bs usepackage\{braket,amsfonts,amsopn\}}\\ \noindent\preamble{\bs DeclareMathOperator\{\bs Range\}\{Range\}}},% bicolor,% listing options={style=siamlatex,% {moretexcs=[2]{Range}}} ]{Declaring math operators} An example of a math operator: \begin{equation}\label{eq:range} \Range(A) = \set{ y \in \mathbb{R}^n | y = Ax }. \end{equation} \end{example} \Cref{ex:foo} shows how to use the \code{align} environment from \texttt{amsmath} to easily align multiple equations. \begin{example}[label={ex:foo},bicolor,% listing options={style=siamlatex,% {morekeywords=[2]{align}}}]{Aligned equations} \Cref{eq:a,eq:b,eq:c} show three aligned equations. \begin{align} f &= g, \label{eq:a} \\ f' &= g', \quad\text{and} \label{eq:b} \\ \mathcal{L}f &= \mathcal{L}g \label{eq:c}. \end{align} \end{example} Another way to number a set of equations is the \code{subequations} environment from \texttt{amsmath}, as shown in \cref{ex:aligned}.
\begin{example}[label={ex:aligned},bicolor,% listing options={style=siamlatex,% {morekeywords=[2]{subequations}}}]{Subequations} We calculate the Fr\'{e}chet derivative of $F$ as follows: \begin{subequations} \begin{align} F'(U,V)(H,K) &= \langle R(U,V),H\Sigma V^{T} + U\Sigma K^{T} - P(H\Sigma V^{T} + U\Sigma K^{T})\rangle \label{eq:aa} \\ &= \langle R(U,V),H\Sigma V^{T} + U\Sigma K^{T}\rangle \nonumber \\ &= \langle R(U,V)V\Sigma^{T},H\rangle + \langle \Sigma^{T}U^{T}R(U,V),K^{T}\rangle. \label{eq:bb} \end{align} \end{subequations} \Cref{eq:aa} is the first line, and \cref{eq:bb} is the last line. \end{example} ~ For an equation split over multiple lines, \cref{ex:ml} shows the usage of the \code{multline} environment provided by \texttt{amsmath}. ~ \begin{example}[label={ex:ml},bicolor,% listing options={style=siamlatex,% {morekeywords=[2]{multline}}}]{Equation split across lines} We claim that the projection $g(U,V)$ is given by the pair of matrices: \begin{multline} \label{eq:ml} g(U,V) = \biggl( \frac{R(U,V)V\Sigma^{T}U^{T} - U\Sigma V^{T}R(U,V)^{T}}{2}U,\\ \frac{R(U,V)^{T}U\Sigma V^{T}-V \Sigma^{T}U^{T}R(U,V)}{2}V \biggr). \end{multline} \end{example} \section{Theorem-like environments} \label{sec:thm} SIAM loads the \texttt{ntheorem} package and uses it to define the following theorem-like environments: \code{theorem}, \code{lemma}, \code{corollary}, \code{definition}, and \code{proposition}. SIAM also defines a \code{proof} environment that automatically inserts the symbol ``$\,\proofbox\,$'' at the end of any proof, even if it ends in an equation environment. \emph{Note that the document may need to be compiled twice for the mark to appear.} Some of the calculus examples were adapted from \cite{CalcI}. \Cref{ex:theorem} shows usage of the \code{theorem} environment. An optional argument can be used to name the theorem. \Cref{ex:cor} illustrates a corollary, without a name, and the proof environment. ~ \begin{example}[label=ex:theorem,bicolor,parbox=false,% listing options={style=siamlatex,% {morekeywords=[2]{theorem}}}]{Theorem} \begin{theorem}[Mean Value Theorem]\label{thm:mvt} Suppose $f$ is a function that is continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$. Then there exists a number $c$ such that $a < c < b$ and \begin{displaymath} f'(c) = \frac{f(b)-f(a)}{b-a}. \end{displaymath} In other words, $f(b)-f(a) = f'(c)(b-a)$. \end{theorem} \end{example} \begin{example}[label=ex:cor,bicolor,parbox=false,% listing options={style=siamlatex,% {morekeywords=[2]{corollary,proof}}}]% {Corollary and proof} \begin{corollary} Let $f(x)$ be continuous and differentiable everywhere. If $f(x)$ has at least two roots, then $f'(x)$ must have at least one root. \end{corollary} \begin{proof} Let $a$ and $b$ be two distinct roots of $f$. By \cref{thm:mvt}, there exists a number $c$ such that \begin{displaymath} f'(c) = \frac{f(b)-f(a)}{b-a} = \frac{0-0}{b-a} = 0. \end{displaymath} \end{proof} \end{example} SIAM also defines commands to create your own theorem- and remark-like environments: \begin{itemize} \item \code{newsiamthm} --- Small caps header, italicized body. \item \code{newsiamremark} --- Italics header, roman body. \end{itemize} Each command takes two arguments. The first is the environment name, and the second is the name to show in the document. These commands should be used instead of \code{\newtheorem}. \Cref{ex:claim,ex:ref} show how to use the commands above, including how to specify the plural version for \texttt{cleveref} if it is unusual.
\begin{example}[label=ex:claim,bicolor,% before upper={\preamble{\bs newsiamthm\{claim\}\{Claim\}}\\ \noindent\preamble{\bs newsiamremark\{hypothesis\}\{Hypothesis\}}\\ \noindent\preamble{\bs crefname\{hypothesis\}\{Hypothesis\}\{Hypotheses\}}},% parbox=false,% listing options={style=siamlatex,% {morekeywords=[2]{claim,proof,hypothesis}}}]{New theorem-like environment} \begin{claim}\label{cl:constant} If $f'(x) = 0$ for all $x \in (a,b)$ then $f(x)$ is constant on $(a,b)$. \end{claim} \begin{hypothesis}\label{hyp1} The function $f$ is continuously differentiable. \end{hypothesis} \begin{hypothesis}\label{hyp2} The random variable is normally distributed. \end{hypothesis} \end{example} \begin{example}[label=ex:ref,bicolor,listing options={style=siamlatex,% {morekeywords=[2]{cref}}}]{References} We can reference multiple types of objects with a single reference: \cref{cl:constant,thm:mvt,hyp1,hyp2}. \end{example} \section{Tables} \label{sec:tab} Table captions should go above the tables. \Cref{ex:simpletable} shows the code to generate \cref{tab:simpletable}. A more complicated example is shown in \cref{ex:table}, which generates \cref{tab:KoMa14}. This example uses subfloats via the \texttt{subfig} package, as well as special column options from the \texttt{array} package. \begin{tcbverbatimwrite}{tmp_\jobname_simpletable.tex} \begin{table}[tbhp] {\footnotesize \caption{Example table}\label{tab:simpletable} \begin{center} \begin{tabular}{|c|c|c|} \hline Species & \bf Mean & \bf Std.~Dev. \\ \hline 1 & 3.4 & 1.2 \\ 2 & 5.4 & 0.6 \\ \hline \end{tabular} \end{center} } \end{table} \end{tcbverbatimwrite} \examplefile[label={ex:simpletable},% listing only, listing options={style=siamlatex}]% {Example table.}{tmp_\jobname_simpletable.tex} \input{tmp_\jobname_simpletable.tex} \begin{tcbverbatimwrite}{tmp_\jobname_table.tex} \newcolumntype{R}{>{$}r<{$}} % \newcolumntype{V}[1]{>{[\;}*{#1}{R@{\;\;}}R<{\;]}} % \begin{table}[tbhp] {\footnotesize \captionsetup{position=top} \caption{Example table adapted from Kolda and Mayo \rm{\cite{KoMa14}}.}\label{tab:KoMa14} \begin{center} \subfloat[$\beta=1$]{ \begin{tabular}{|r|R|V{3}|c|r@{\,$\pm$\,}l|} \hline occ. & \multicolumn{1}{c|}{$\lambda$} & \multicolumn{4}{c|}{$\mathbf{x}$} & fevals & \multicolumn{2}{c|}{time (sec.)}\\ \hline 718 & 11.3476 & 0.5544 & 0.3155 & 1.2018 & 0.0977 & 45 & 0.17 & 0.06 \\ \hline 134 & 3.7394 & 0.2642 & -1.1056 & 0.2657 & -0.3160 & 31 & 0.12 & 0.05 \\ \hline 4 & \multicolumn{6}{c|}{\emph{--- Failed to converge ---}} & 0.21 & 0.10 \\ \hline \end{tabular}} \subfloat[$\beta=-1$]{ \begin{tabular}{|r|R|V{3}|c|r@{\,$\pm$\,}l|} \hline occ. & \multicolumn{1}{c|}{$\lambda$} & \multicolumn{4}{c|}{$\mathbf{x}$} & fevals & \multicolumn{2}{c|}{time (sec.)}\\ \hline 72 & -1.1507 & 0.2291 & 0.6444 & 0.3540 & -0.8990 & 34 & 0.14 & 0.06 \\ \hline 624 & -6.3985 & 0.1003 & 0.1840 & 0.5305 & 1.2438 & 48 & 0.19 & 0.08 \\ \hline 2 & \multicolumn{6}{c|}{\emph{--- Failed to converge ---}} & 0.23 & 0.02 \\ \hline \end{tabular}} \end{center} } \end{table} \end{tcbverbatimwrite} \examplefile[label={ex:table},% before upper={\preamble[\scriptsize]{\bs usepackage\{array\}}\\[-0.4em] \noindent\preamble[\scriptsize]{\bs usepackage[caption=false]\{subfig\}}},% listing only, listing options={% style=siamlatex,basicstyle=\ttfamily\scriptsize}]% {Example table with subtables.}{tmp_\jobname_table.tex} \input{tmp_\jobname_table.tex} \section{Figures} \label{sec:fig} It is recommended that all figures be generated in high resolution.
In the past, SIAM has required encapsulated postscript (EPS) format for final production. This is still an acceptable format, but SIAM also now allows high-resolution PDF, JPEG, and PNG figures. If working with EPS images and using \texttt{pdflatex}, we recommend the package \texttt{epstopdf} to automatically convert EPS images to PDF for inclusion in PDF documents created by \texttt{pdflatex}. \Cref{ex:fig} shows the code to generate \cref{fig:testfig}. This example uses the \texttt{graphicx} package for the \code{\includegraphics} command. \begin{tcbverbatimwrite}{tmp_\jobname_fig.tex} \begin{figure}[tbhp] \centering \subfloat[$\epsilon_{\max}=5$]{\label{fig:a}\includegraphics{lexample_fig1}} \subfloat[$\epsilon_{\max}=0.5$]{\label{fig:b}\includegraphics{lexample_fig2}} \caption{Example figure using external image files.} \label{fig:testfig} \end{figure} \end{tcbverbatimwrite} \examplefile[label={ex:fig},% before upper={\preamble[\scriptsize]{\bs usepackage\{graphicx,epstopdf\}}\\[-0.4em] \noindent\preamble[\scriptsize]{\bs usepackage[caption=false]\{subfig\}}},% listing only, listing options={% style=siamlatex,basicstyle=\ttfamily\scriptsize}]% {Example figure with subfigures and external files}{tmp_\jobname_fig.tex} \input{tmp_\jobname_fig.tex} Another option for figures is a graphics generator that is platform- and format-independent. PGF is a TeX macro package for generating such graphics and works together with the most important TeX backend drivers, including pdftex and dvips. The user-friendly syntax layer on top of PGF is called TikZ. Here we show an example using \texttt{PGFPLOTS}, useful for drawing high-quality plots directly in \LaTeX. \Cref{ex:data} and \cref{ex:pgfplots} show the data and code, respectively, to generate \cref{fig:pgfplots}, adapted from \cite{pgfplots}. \examplefile[label={ex:data},listing only, listing options={style=siamlatex,basicstyle=\ttfamily\scriptsize}]% {Example data file (data.dat)}{data.dat} \begin{tcbverbatimwrite}{tmp_\jobname_tikz.tex} \begin{figure}[tbhp] \centering \begin{tikzpicture} \begin{loglogaxis}[height=2.75in, grid=major, xlabel={Degrees of Freedom}, ylabel={$L_2$ Error}, legend entries={$d=2$,$d=3$}] \addplot table [x=d2_dof,y=d2_l2_err] {data.dat}; \addplot table [x=d3_dof,y=d3_l2_err] {data.dat}; \end{loglogaxis} \end{tikzpicture} \caption{Example \texttt{PGFPLOTS} figure.} \label{fig:pgfplots} \end{figure} \end{tcbverbatimwrite} \examplefile[label={ex:pgfplots},% before upper={\preamble[\scriptsize]{\bs usepackage\{pgfplots\}}},% listing only, listing options={% style=siamlatex}]% {Example TikZ/PGF for platform-independent graphics.}{tmp_\jobname_tikz.tex} \input{tmp_\jobname_tikz.tex} \section{Algorithms} \label{sec:algs} SIAM automatically includes the \texttt{algorithm} package in the class definition. This provides the float environment. Users have the choice of \texttt{algpseudocode}, \texttt{algorithmic}, and other packages for actually formatting the algorithm. For example, \cref{alg:buildtree} is produced by the code in \cref{ex:alg}. In order to reference lines within the algorithm, we need to tell the \texttt{cleveref} package how to do the referencing, which is the second line of \cref{ex:alg}. Then we can use the code \code{\cref{line3}} to produce \cref{line3}.
\begin{tcbverbatimwrite}{tmp_\jobname_alg.tex} \begin{algorithm} \caption{Build tree} \label{alg:buildtree} \begin{algorithmic}[1] \STATE{Define $P:=T:=\{ \{1\},\ldots,\{d\}$\}} \WHILE{$\#P > 1$} \STATE\label{line3}{Choose $C^\prime\in\mathcal{C}_p(P)$ with $C^\prime := \operatorname{argmin}_{C\in\mathcal{C}_p(P)} \varrho(C)$} \STATE{Find an optimal partition tree $T_{C^\prime}$ } \STATE{Update $P := (P{\setminus} C^\prime) \cup \{ \bigcup_{t\in C^\prime} t \}$} \STATE{Update $T := T \cup \{ \bigcup_{t\in\tau} t : \tau\in T_{C^\prime}{\setminus} \mathcal{L}(T_{C^\prime})\}$} \ENDWHILE \RETURN $T$ \end{algorithmic} \end{algorithm} \end{tcbverbatimwrite} \examplefile[float=htpb,label={ex:alg},% before upper={\preamble[\scriptsize]{\bs usepackage\{algorithmic\}}\\[-0.4em] \preamble[\scriptsize]{\bs Crefname\{ALC@unique\}\{Line\}\{Lines\}}},% listing only, listing options={% style=siamlatex,basicstyle=\ttfamily\scriptsize}]% {Example algorithm}{tmp_\jobname_alg.tex} \input{tmp_\jobname_alg.tex} \section{Sections} \label{sec:sec} Sections are denoted using standard \LaTeX\ section commands, i.e., \code{\section}, \code{\subsection}, etc. If you wish to end the section title with something other than a period (the default), you have to add the command \code{\nopunct} at the end of the title. Appendices are created with the normal sectioning commands, following the command \code{\appendix}. \section{Introduction} The introduction introduces the context and summarizes the manuscript. It is important to clearly state the contributions of this piece of work. The next two paragraphs are text filler, generated by the \texttt{lipsum} package. \lipsum[2-3] The paper is organized as follows. Our main results are in \cref{sec:main}, our new algorithm is in \cref{sec:alg}, experimental results are in \cref{sec:experiments}, and the conclusions follow in \cref{sec:conclusions}. \section{Main results} \label{sec:main} We interleave text filler with some example theorems and theorem-like items. \lipsum[4] Here we state our main result as \cref{thm:bigthm}; the proof is deferred to \cref{sec:proof}. \begin{theorem}[$LDL^T$ Factorization \cite{GoVa13}]\label{thm:bigthm} If $A \in \mathbb{R}^{n \times n}$ is symmetric and the principal submatrix $A(1:k,1:k)$ is nonsingular for $k=1:n-1$, then there exists a unit lower triangular matrix $L$ and a diagonal matrix \begin{displaymath} D = \diag(d_1,\dots,d_n) \end{displaymath} such that $A=LDL^T$. The factorization is unique. \end{theorem} \lipsum[6] \begin{theorem}[Mean Value Theorem]\label{thm:mvt} Suppose $f$ is a function that is continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$. Then there exists a number $c$ such that $a < c < b$ and \begin{displaymath} f'(c) = \frac{f(b)-f(a)}{b-a}. \end{displaymath} In other words, \begin{displaymath} f(b)-f(a) = f'(c)(b-a). \end{displaymath} \end{theorem} Observe that \cref{thm:bigthm,thm:mvt,cor:a} correctly mix references to multiple labels. \begin{corollary}\label{cor:a} Let $f(x)$ be continuous and differentiable everywhere. If $f(x)$ has at least two roots, then $f'(x)$ must have at least one root. \end{corollary} \begin{proof} Let $a$ and $b$ be two distinct roots of $f$. By \cref{thm:mvt}, there exists a number $c$ such that \begin{displaymath} f'(c) = \frac{f(b)-f(a)}{b-a} = \frac{0-0}{b-a} = 0. \end{displaymath} \end{proof} Note that it may require two \LaTeX\ compilations for the proof marks to show.
Display matrices can be rendered using environments from \texttt{amsmath}: \begin{equation}\label{eq:matrices} S=\begin{bmatrix}1&0\\0&0\end{bmatrix} \quad\text{and}\quad C=\begin{pmatrix}1&1&0\\1&1&0\\0&0&0\end{pmatrix}. \end{equation} \Cref{eq:matrices} shows some example matrices. We calculate the Fr\'{e}chet derivative of $F$ as follows: \begin{subequations} \begin{align} F'(U,V)(H,K) &= \langle R(U,V),H\Sigma V^{T} + U\Sigma K^{T} - P(H\Sigma V^{T} + U\Sigma K^{T})\rangle \label{eq:aa} \\ &= \langle R(U,V),H\Sigma V^{T} + U\Sigma K^{T}\rangle \nonumber \\ &= \langle R(U,V)V\Sigma^{T},H\rangle + \langle \Sigma^{T}U^{T}R(U,V),K^{T}\rangle. \label{eq:bb} \end{align} \end{subequations} \Cref{eq:aa} is the first line, and \cref{eq:bb} is the last line. \section{Algorithm} \label{sec:alg} \lipsum[40] Our analysis leads to the algorithm in \cref{alg:buildtree}. \begin{algorithm} \caption{Build tree} \label{alg:buildtree} \begin{algorithmic} \STATE{Define $P:=T:=\{ \{1\},\ldots,\{d\}$\}} \WHILE{$\#P > 1$} \STATE{Choose $C^\prime\in\mathcal{C}_p(P)$ with $C^\prime := \operatorname{argmin}_{C\in\mathcal{C}_p(P)} \varrho(C)$} \STATE{Find an optimal partition tree $T_{C^\prime}$ } \STATE{Update $P := (P{\setminus} C^\prime) \cup \{ \bigcup_{t\in C^\prime} t \}$} \STATE{Update $T := T \cup \{ \bigcup_{t\in\tau} t : \tau\in T_{C^\prime}{\setminus} \mathcal{L}(T_{C^\prime})\}$} \ENDWHILE \RETURN $T$ \end{algorithmic} \end{algorithm} \lipsum[41] \section{Experimental results} \label{sec:experiments} \lipsum[50] \Cref{fig:testfig} shows some example results. Additional results are available in the supplement in \cref{tab:foo}. \begin{figure}[htbp] \centering \label{fig:a}\includegraphics{lexample_fig1} \caption{Example figure using external image files.} \label{fig:testfig} \end{figure} \lipsum[51] \section{Discussion of \texorpdfstring{{\boldmath$Z=X \cup Y$}}{Z = X union Y}} \lipsum[76] \section{Conclusions} \label{sec:conclusions} Some conclusions here. \section{A detailed example} Here we include some equations and theorem-like environments to show how these are labeled in a supplement and can be referenced from the main text. Consider the following equation: \begin{equation} \label{eq:suppa} a^2 + b^2 = c^2. \end{equation} You can also reference equations such as \cref{eq:matrices,eq:bb} from the main article in this supplement. \lipsum[100-101] \begin{theorem} An example theorem. \end{theorem} \lipsum[102] \begin{lemma} An example lemma. \end{lemma} \lipsum[103-105] Here is an example citation: \cite{KoMa14}. \section[Proof of Thm]{Proof of \cref{thm:bigthm}} \label{sec:proof} \lipsum[106-112] \section{Additional experimental results} \Cref{tab:foo} shows additional supporting evidence. \begin{table}[htbp] {\footnotesize \caption{Example table} \label{tab:foo} \begin{center} \begin{tabular}{|c|c|c|} \hline Species & \bf Mean & \bf Std.~Dev. \\ \hline 1 & 3.4 & 1.2 \\ 2 & 5.4 & 0.6 \\ \hline \end{tabular} \end{center} } \end{table} \bibliographystyle{siamplain} \section{Introduction}\label{sec:Intro} The goal of topology optimization is to find an optimal geo\-metry of a solid body that maximizes its performance under certain boundary conditions, by determining an optimal distribution of material in a predefined design domain. It has many applications in industry, such as in mechanical and electrical engineering. The main challenge is the high computational cost of solving the large-scale systems that arise from numerical methods for PDEs on high-resolution meshes.
A basic example of topology optimization is the minimum compliance problem, where the deformation energy of an elastic body under prescribed loading and boundary conditions is to be minimized, given an amount of material. Relating the local stiffness of the body linearly to the continuous material distribution and employing a finite element discretization leads to the so-called \emph{variable thickness sheet} (VTS) problem \begin{equation} \label{eq:to_intro} \begin{aligned} &\min_{\rho\in\mathbb{R}^m\!,\,u\in\mathbb{R}^n} \frac{1}{2}f^\top u\\ &\mbox{subject to}\\ &\qquad K(\rho) u = f\\ &\qquad \sum_{i=1}^m \rho_i = V\\ &\qquad \rho_i\geq \urho_i, \quad i=1,\ldots,m \\ &\qquad \rho_i\leq \orho_i, \quad i=1,\ldots,m \; , \end{aligned} \end{equation} where $K(\rho) = \sum_{i=1}^m \rho_i K_i$, with $K_i\in\mathbb{R}^{n\times n}$, is the stiffness matrix and $f\in \mathbb{R}^n$ is the load vector of the finite element equilibrium equations. The design variable $\rho$ is commonly referred to as the \emph{density}, while the vector $u$ represents the nodal displacements. We assume that $K_i$ are symmetric and positive semidefinite and that $\sum_{i=1}^m K_i$ is sparse and positive definite. We also assume that the volume $V\in\mathbb{R}$ and the lower and upper bounds $\urho\in\mathbb{R}^m_+$ and $\orho\in\mathbb{R}^m_+$ are chosen such that the problem is strictly feasible. This implies $\orho>\urho$, among other things. While problem \eqref{eq:to_intro} is not itself convex, it is equivalent to a convex problem; see \cite{ben1996hidden} and Theorem~\ref{th:equiv} below. For a more detailed derivation of the VTS problem and a comprehensive treatment of the theory and applications of topology optimization, see for example \cite{bendsoe-sigmund}. The minimum compliance problem has been studied extensively. Still, it is the subject of ongoing research as higher design detail calls for higher mesh resolution, which in turn makes the problem more computationally demanding. \citeauthor{Aage_2017}, for example, performed topology optimization on a model with more than one billion elements \cite{Aage_2017}. The bottleneck of algorithms for topology optimization is usually the solution of large linear systems. Direct solvers are not a viable option, due to their computational complexity and demand on computer memory, and iterative solvers, most typically of Krylov type, are given preference. Since their convergence behavior depends strongly on the condition number of the system matrix, preconditioning plays a vital role. The multigrid method, introduced by \citeauthor{Brandt_1977} as a solver for boundary-value problems \cite{Brandt_1977}, has become popular as a means of preconditioning the system, employed inside the iterative solvers. As early as \citeyear{maar-schulz}, \citet{maar-schulz} proposed a conjugate gradient (CG) method preconditioned by multigrid for topology optimization. Similar solvers were used in \cite{amir} and \cite{MK_Mohammed_2016}. In \cite{Aage_2017}, the authors chose a multi-layered algorithm involving two types of Krylov solvers and the geometric as well as algebraic multigrid method. We refer the reader to \cite{briggs2000multigrid} for a comprehensive introduction to the multigrid method.
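To illustrate how a multigrid cycle is employed as a preconditioner inside a Krylov solver, the following self-contained Python sketch applies one two-grid V-cycle as a preconditioner for CG on a 1D Poisson toy problem. The model problem, the damped Jacobi smoother and the transfer operators are illustrative assumptions; they are not the solver components used for the elasticity systems in this paper.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg, spsolve

# Toy stand-in for the stiffness matrix: 1D Poisson with n interior
# points; n = 2^L - 1 so that standard coarsening is exact.
n = 2**9 - 1
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Full-weighting restriction R and linear interpolation P = 2 R^T.
nc = (n - 1) // 2
rows, cols, vals = [], [], []
for i in range(nc):
    j = 2 * i + 1                       # fine-grid index of coarse point i
    rows += [i, i, i]
    cols += [j - 1, j, j + 1]
    vals += [0.25, 0.5, 0.25]
R = sp.csr_matrix((vals, (rows, cols)), shape=(nc, n))
P = 2.0 * R.T
Ac = (R @ A @ P).tocsc()                # Galerkin coarse-grid operator

def smooth(x, b, sweeps=2, omega=2.0 / 3.0):
    """Damped Jacobi sweeps for A x = b."""
    d = A.diagonal()
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d
    return x

def vcycle(b):
    """One two-grid V-cycle for A x = b, used as a preconditioner."""
    x = smooth(np.zeros_like(b), b)               # pre-smoothing
    x = x + P @ spsolve(Ac, R @ (b - A @ x))      # coarse-grid correction
    return smooth(x, b)                           # post-smoothing

M = LinearOperator((n, n), matvec=vcycle)
x, info = cg(A, np.ones(n), M=M)                  # preconditioned CG
print(info == 0, np.linalg.norm(A @ x - np.ones(n)))
\end{verbatim}
In a full code the same structure carries over: the V-cycle recurses over several levels rather than two, and in the setting of this paper a multigrid preconditioned MINRES solver plays the role that CG plays in this sketch.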
Beyond the issue of efficiently solving the linear systems arising within each iteration of the optimization algorithm, the total number of such iterations required to reach the optimal solution---and thus the choice of optimization method---also affects the overall time-efficiency of the algorithm. The most commonly used methods for the minimum compliance problem are the \emph{optimality criteria} (OC) method, see \cite{bendsoe-sigmund}, and the \emph{method of moving asymptotes} (MMA) \cite{svanberg}. One of the advantages of the OC method is that it is relatively simple to implement; see in particular \cite{top88}. To our knowledge, global convergence results exist only for the MMA and it is often the algorithm of choice in commercial software or large-scale applications, such as that described in \cite{Aage_2017}. Both of these methods, however, usually rely on heuristics for their stopping criteria and, in practice, display a very similar rate of convergence. A possible alternative to the aforementioned methods is the \emph{interior point} (IP) method. It has become increasingly popular in the past twenty to thirty years, particularly for convex optimization \cite{Wright_1997:IP}. Its theoretical advantage over the OC method or the MMA for convex problems lies in its rate of convergence, especially for convex quadratic problems such as \eqref{eq:to_intro}. \citet{maar-schulz} used an IP algorithm for 2D topology optimization. In \cite{jarre-kocvara-zowe}, \citeauthor{jarre-kocvara-zowe} proposed an IP method for truss topology optimization. This was later extended in \cite{MK_Mohammed_2016} to large 2D VTS problems, where it outperformed the OC method, in terms of both iterations and overall CPU time required to achieve optimality to within a certain precision. In one part of our paper, we build on this work and further improve the algorithm to apply it to large-scale 3D problems. The approach is described in Section~\ref{sec:IP} and results of some examples are presented in Section~\ref{sec:Num}. Going from 2D to 3D is by no means straightforward. The largest examples in \cite{MK_Mohammed_2016} were based on nine regular refinements of a very coarse, e.g. $2\times 2$, mesh. This resulted in 262\,144 finite elements and 526\,338 degrees of freedom (components of the displacement vector $u$). Such a problem could still be solved on a standard laptop. If we used the same refinement level in a 3D example starting with a $2\times 2\times 2$ coarse mesh, we would end up with a problem with more than 134 million finite elements and 405 million degrees of freedom. Moreover, while the stiffness matrix in 2D typically has 18 non-zero elements per row, in 3D problems this number goes up to 81 non-zeros, i.e., the stiffness matrix is considerably denser. All this makes much greater demands on the numerical linear algebra used in the optimization algorithm. A common problem with IP methods is the ill-conditioning of the system as the iterates approach the optimal solution. This leads to an increase in solver iterations which can make the algorithm nonviable. A class of methods that aims to counteract this problem while otherwise following a strategy similar to that of the IP method is the class of \emph{penalty-barrier multiplier} (PBM) methods. They were first introduced in \cite{ben1997penalty}, building on the modified barrier methods proposed by \citeauthor{polyak1992modified} in \cite{polyak1992modified}.
As part of the larger class of augmented Lagrangian methods, they have one particular convergence property which sets them apart from IP methods. The latter involve a sequence of barrier parameters which needs to tend to 0 for convergence to the optimal solution, this being the cause of the increasing ill-conditioning; the former feature a penalty parameter for which there exists a value larger than 0 such that the method still converges to the optimal solution. See, for example, \cite[Corollary 6.15]{Stingl_2006} for a result specific to penalty-barrier methods. PBM methods have been successfully applied to convex problems and semidefinite problems in topology optimization \cite{pennon-iter}. In Section~\ref{sec:MGNR} of this paper, a penalty-barrier method for \eqref{eq:to_intro} is introduced. In contrast to the IP method, the PBM method does not stay in the strict interior of the feasible region. This poses a problem with regard to the positive definiteness of $K(\rho)$, which depends on $\rho_i$ being strictly positive for all $i=1,\dots,m$. We circumvent this problem by applying the PBM method to the dual of \eqref{eq:to_intro}. The theoretical background for this is covered in Section~\ref{sec:PrimalDualVTS}. The PBM approach described in Section~\ref{sec:MGNR} is applied to several examples in Section~\ref{sec:Num}, in order to compare it to the IP method from Section~\ref{sec:IP}, as well as to the OC method, which is briefly described in Section~\ref{sec:OC}. Lastly, a remark on notation: throughout this paper, we use $e_i$ to denote the $i$-th canonical unit vector and $e$ to denote the vector $(1,\dots,1)^\top$ of appropriate dimension. \section{An Interior Point method for topology optimization}\label{sec:IP} In this section, we describe the primal-dual IP method used to solve \eqref{eq:to_intro}. This involves deriving the linear system to be solved in each iteration and taking Schur complements of this system in order to obtain a system that, firstly, is symmetric positive definite and, secondly, displays a structure that allows a straightforward application of the multigrid method as a preconditioner. In this, we follow \cite{MK_Mohammed_2016}. Many features of the algorithm proposed in that reference had to be changed to improve its performance and make it viable for 3D problems. Therefore, we include all details of the algorithm. We do not recapitulate the basics of primal-dual IP methods and instead refer the reader to \cite{Wright_1997:IP}, to name just one standard piece of literature. Some notation from the previous section will be reused below for variables that serve a similar purpose. However, the primal and dual variables $\rho$, $u$, $\alpha$, $\unu$ and $\onu$ have the same meaning in both sections. This is worth noting because it means that the results from the PBM method described in the previous section and the IP method described below are directly comparable. \subsection{Primal-dual Interior Point method for the VTS problem} We start by setting up the KKT conditions for the VTS problem \eqref{eq:to_intro}. Note that the problem exhibits a ``hidden convexity'', i.e., it is not itself a convex problem but is equivalent to a different, convex problem \cite{ben1996hidden}. The strict feasibility, given for \eqref{eq:to_intro} by design---see Section \ref{sec:Intro}---translates to this equivalent problem. Hence, the Slater condition is satisfied and the KKT conditions are necessary and sufficient optimality conditions.
They are given by the constraint equations in \eqref{eq:to_intro} and the equations below. \begin{align*} \dfrac{1}{2} u^\top K_i u + \alpha + \unu_i - \onu_i &= 0 \,, \quad i=1,\dots,m \\ (\rho_i - \urho_i) \unu_i &= 0 \,, \quad i=1,\dots,m \\ (\orho_i - \rho_i) \onu_i &= 0 \,, \quad i=1,\dots,m \; . \end{align*} Note that in the above, the Lagrange multipliers for the equilibrium equation constraint $K(\rho)u=f$ have already been eliminated, taking advantage of the fact that the minimum compliance problem is self-adjoint. This means that, due to our choice of objective function, the aforementioned multipliers also satisfy the equilibrium equation---with the right-hand side only differing by a constant factor. Hence, we can directly identify them with $u$. See, for example, \cite{bendsoe-sigmund} for details. The complementarity conditions for the lower and upper bound constraints, i.e., the second and third lines in the system above, are now perturbed by replacing $0$ by barrier parameters $r>0$ and $s>0$, respectively. The resulting system of equations needs to be solved for fixed $r,s$ in each iteration of the IP algorithm. This is done approximately by performing one iteration of the Newton method. We get the following residual function for the Newton method: \begin{align*} {\rm res}(u,\alpha,\rho,\unu,\onu) \; = \; \begin{bmatrix} {\rm res}_1 \\ {\rm res}_2 \\ {\rm res}_3 \\ {\rm res}_4 \\ {\rm res}_5 \end{bmatrix} \; = \; & \begin{bmatrix} -f \\ -V \\ 0 \\ -r \, e \\ -s \, e \end{bmatrix} + \sum_{i=1}^m \rho_i \begin{bmatrix} K_i u \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + \sum_{i=1}^m \dfrac{1}{2}u^\top K_i u \begin{bmatrix} 0 \\ 0 \\ e_i \\ 0 \\ 0 \end{bmatrix} \\[0.5em] + & \sum_{i=1}^m \alpha \begin{bmatrix} 0 \\ 0 \\ e_i \\ 0 \\ 0 \end{bmatrix} + \sum_{i=1}^m \unu_i \begin{bmatrix} 0 \\ 0 \\ e_i \\ (\rho_i-\urho_i) e_i \\ 0 \end{bmatrix} + \sum_{i=1}^m \onu_i \begin{bmatrix} 0 \\ 0 \\ -e_i \\ 0 \\ (\orho_i-\rho_i) e_i \end{bmatrix} \end{align*} Next, we obtain the derivative of the residual function as the block matrix \begin{equation} \label{eq:gradres} \nabla_{(u,\alpha,\rho,\unu,\onu)} {\rm res}(\cdot) = \begin{bmatrix} K(\rho) & 0 & B(u) & 0 & 0 \\ 0 & 0 & e^\top & 0 & 0 \\ B(u)^\top & e & 0 & I & -I \\ 0 & 0 & \altunderline{N} & \altunderline{P} & 0 \\ 0 & 0 & -\altoverline{N} & 0 & \altoverline{P} \end{bmatrix} \; , \end{equation} where $I\in\mathbb{R}^{m\times m}$ is the identity matrix and we use the notation { \setlength{\jot}{0.8em} \begin{align*} B(u) &= \left[ K_1 u, \dots, K_m u \right] \,,\\ \altunderline{N} &= \diag( \unu ) \,,\qquad \altoverline{N} = \diag( \onu ) \,,\\ \altunderline{P} &= \diag( \rho - \urho ) \,, \qquad \altoverline{P} = \diag( \orho - \rho ) \,. \end{align*}% }% The system matrix $\nabla {\rm res}$ in \eqref{eq:gradres} is indefinite. Similar to the procedure in Section \ref{sec:MGNR}, we can reduce the above system to a positive definite one. We do this in two steps. First, we construct the Schur complement of $\nabla {\rm res}$ with respect to its invertible lower right block $\begin{bmatrix} \altunderline{P} & 0 \\ 0 & \altoverline{P} \end{bmatrix}$. We then in turn form the Schur complement of the result with respect to its lower right block; see \cite{MK_Mohammed_2016} for details.
This leaves us with the matrix \begin{equation} \label{eq:S_IP} S = \begin{bmatrix} K(\rho) & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} B(u) \\ e^\top \end{bmatrix} \left( \altunderline{P}^{-1} \altunderline{N} + \altoverline{P}^{-1} \altoverline{N} \right) ^{-1} \begin{bmatrix} B(u)^\top & e \end{bmatrix} \;\, \in\mathbb{R}^{(n+1)\times(n+1)} \,. \end{equation} This matrix is positive definite as long as $\rho$ is strictly feasible and $\unu,\onu > 0$. Recall that $(K_i u)(K_i u)^\top$ has the same sparsity structure as $K_i$. Hence, the matrix $S$ in \eqref{eq:S_IP} has the same sparsity structure as that in \eqref{eq:S} in the previous section. In each iteration of the IP method, we approximately solve the nonlinear system \[ {\rm res}(u,\alpha,\rho,\unu,\onu) = 0 \] by performing one iteration of Newton's method. Instead of solving the Newton system \[ \nabla_{(u,\alpha,\rho,\unu,\onu)} {\rm res}(u,\alpha,\rho,\unu,\onu) \cdot (\Delta u, \Delta \alpha, \Delta \rho, \Delta \unu, \Delta \onu) = - {\rm res} (u,\alpha,\rho,\unu,\onu) \, , \] we solve the equivalent system \begin{equation} \label{eq:IP_system} S \; \begin{bmatrix} \Delta u \\ \Delta \alpha \end{bmatrix} = rhs \, , \end{equation} where, according to the above reduction of the system, \begin{align*} rhs = & -\begin{bmatrix} -f \\ -V \end{bmatrix} - \sum_{i=1}^m \rho_i \begin{bmatrix} K_i u \\ 1 \end{bmatrix} \\[0.5em] & - \begin{bmatrix} B(u) \\ e^\top \end{bmatrix} \left( \altunderline{P}^{-1} \altunderline{N} + \altoverline{P}^{-1} \altoverline{N} \right) ^{-1} \left( {\rm res}_3 + \altunderline{P}^{-1} {\rm res}_4 - \altoverline{P}^{-1} {\rm res}_5 \right) \, . \end{align*} From the solution of \eqref{eq:IP_system}, we can reconstruct the increment for $\rho$ using the formula \begin{equation} \label{eq:Delta_rho} \Delta \rho = - \left( \altunderline{P}^{-1} \altunderline{N} + \altoverline{P}^{-1} \altoverline{N} \right) ^{-1} \left( {\rm res}_3 + \altunderline{P}^{-1} {\rm res}_4 - \altoverline{P}^{-1} {\rm res}_5 - B(u)^\top \Delta u - \Delta \alpha \, e \right) \, . \end{equation} The increments for the Lagrange multipliers $\unu$ and $\onu$ are computed based on the stable reduction proposed in \cite{Freund_1997}, with a slight adjustment to account for the upper bound constraints not present in that paper. The multipliers are updated by the formulas below, in the order given \begin{align} \label{eq:Delta_onu} \Delta \onu &= \dfrac{1}{\orho-\urho} \left( \altunderline{P}( B(u)^\top \Delta u + \Delta \alpha \, e) - (\altunderline{N} - \altoverline{N})\rho - \left( {\rm res}_4 + {\rm res}_5 - \altunderline{P} {\rm res}_3 \right) \right) \, , \\ \label{eq:Delta_unu} \Delta \unu &= \Delta \onu - B(u)^\top \Delta u - \Delta \alpha \, e - {\rm res}_3 \, . \end{align} Once the increments have been obtained, we need to determine an appropriate step length. Our algorithm employs a long step strategy \cite{Wright_1997:IP} in that it restricts the step length mainly to guarantee feasibility of the next iterate. We do not use the same step length for all increments. Rather, $\Delta \rho$ and $\Delta u$ share one step length; the step length for $\Delta \alpha$ is always equal to 1; and separate step lengths are calculated for $\Delta \unu$ and $\Delta \onu$. For details, see Algorithm \ref{alg:ip}. This strategy proved to be the most effective in numerical experiments. After each IP iteration, the barrier parameters are updated adaptively.
After each IP iteration, the barrier parameters are updated adaptively. For this, we compute the duality measures for the lower and upper bound constraints, \[ \dfrac{ \unu^\top (\rho-\urho) }{m} \quad \text{and} \quad \dfrac{ \onu^\top (\orho-\rho) }{m} \,, \] respectively. We then scale these measures by constants $0<\sigma_r<1$ and $0<\sigma_s<1$ to update $r$ and $s$. At this point, one unconventional feature of our algorithm should be highlighted. The new values for $r$ and $s$ are not used to construct the right-hand side term for the next iteration, but rather for the iteration after that. We found that this ``iteration shift'', peculiar though it might seem, makes the algorithm significantly more efficient. Indeed, without this shift, this version of the code is hardly viable and requires several Newton iterations per IP iteration instead of just one. Finally, we require a stopping criterion for the algorithm. Just like in Algorithm~\ref{alg:pbm}, we use the duality gap $\delta(u,\alpha)$ as a measure of optimality, scaled by the current objective function, which in this case is the \emph{primal} objective function $\frac{1}{2} f^\top u$. On top of this, we want to ensure that our solution is feasible to within a certain accuracy. Our feasibility measure is the following sum of weighted residual norms: \begin{equation} \label{eq:ip_res} \tres_{IP} = \dfrac{ \| {\rm res}_1 \|_2 }{ \| f \|_2 } + \dfrac{ | {\rm res}_2 | }{ V } + \dfrac{ \| {\rm res}_3 \|_2 }{ \| \unu \|_2 + \| \onu \|_2 } + \dfrac{ | e^\top {\rm res}_4 | }{ m } + \dfrac{ | e^\top {\rm res}_5 | }{ m } \, . \end{equation} Furthermore, the duality gap should be (nearly) positive, as a negative duality gap points to infeasibility. Algorithm \ref{alg:ip} sums up our IP method. The parameter values that we used in our experiments are $\tolsym{IP} = 10^{-5}$ and $\sigma_r=\sigma_s=0.2$. For the initial values, we chose $u=0$, $\alpha=1$, $\rho_i=V/m$ for all $i=1,\dots,m$ and $\unu=\onu=e$. The barrier parameters start at $r=s=10^{-2}$. \begin{algorithm} \caption{Primal-dual IP} \label{alg:ip} Let $\tolsym{IP}>0$ and $0<\sigma_r,\sigma_s<1$ be given. Choose initial vectors $(u,\rho)$ and $(\alpha,\unu,\onu)$. Set the barrier parameter update values as $r^+ = \sigma_r \cdot \unu^\top (\rho-\urho)/m$ and $s^+ = \sigma_s \cdot \onu^\top (\orho-\rho)/m$.
\begin{algorithmic}[1] \Repeat \State Solve system \eqref{eq:IP_system} to obtain $(\Delta u, \Delta \alpha)$ \State Reconstruct $(\Delta \rho, \Delta \onu, \Delta \unu)$ using \cref{eq:Delta_rho,eq:Delta_onu,eq:Delta_unu} \State Update barrier parameters:\ $ r = r^+\,,\ s=s^+ $ \State Compute the following step lengths \begin{gather*} \kappa_{u}=\kappa_{\rho}= \min \left\{ 0.9 \cdot \min_{\Delta \rho_i>0}{\dfrac{\orho_i-\rho_i}{\Delta\rho_i}} \,,\, 0.9 \cdot \min_{\Delta \rho_i<0}{\dfrac{\urho_i-\rho_i}{\Delta\rho_i}} \,, \; 1 \right\} \\[0.5em] \kappa_{\unu} = 0.9 \cdot \min_{\Delta\unu<0} \dfrac{-\unu}{\Delta\unu}, \quad \kappa_{\onu} = 0.9 \cdot \min_{\Delta\onu<0} \dfrac{-\onu}{\Delta\onu}, \quad \kappa_{\alpha} = 1 \end{gather*} \State Update all variables \begin{gather*} u = u + \kappa_u \Delta u \,, \quad \alpha = \alpha + \kappa_\alpha \Delta \alpha \,, \quad \rho = \rho + \kappa_{\rho} \Delta \rho \,, \\[0.5em] \unu = \unu + \kappa_{\unu} \Delta \unu \,, \quad \onu = \onu + \kappa_{\onu} \Delta \onu \end{gather*} \State Compute the duality gap $\delta(u,\alpha)$ by \eqref{eq:gap}, the objective function $\frac{1}{2}f^\top u$ and the feasibility measure $\tres_{IP}$ by \eqref{eq:ip_res} \If{ $ \tolsym{IP} > \delta(u,\alpha) / (\frac{1}{2}f^\top u) > - 0.1\cdot\tolsym{IP}$ and $\tres_{IP} < 10\cdot\tolsym{IP}$} \State STOP \EndIf \State Determine the barrier parameters for the shifted update \[ r^+ = \sigma_r \cdot \dfrac{ \unu^\top (\rho-\urho) }{m} \,, \quad s^+ = \sigma_s \cdot \dfrac{ \onu^\top (\orho-\rho) }{m} \] \Until convergence \end{algorithmic} \end{algorithm} \section{The penalty-barrier multiplier method for topology optimization}\label{sec:MGNR} In this section, we describe the class of Penalty-Barrier Multiplier (PBM) algorithms and their application to the VTS problem. This class of algorithms was originally developed and analyzed by R.~Polyak under the name Modified Barrier algorithms; see, among others, \cite{polyak1988smooth,polyak1992modified}. These methods are defined for a class of ``modified'' barrier functions; a particular choice of a function leads to a particular algorithm. Ben-Tal and Zibulevsky \cite{ben1997penalty} analyzed one such choice that proved to be computationally very efficient; see also \cite{kocvara2003pennon}. The PBM method was first applied to topology optimization problems in \cite{kocvara1998mechanical}. \subsection{Penalty-barrier multiplier methods} Consider a generic convex constrained optimization problem $$ \min_x \{\mathfrak{f}(x)\mid \mathfrak{g}_i(x)\leq 0,\ i=1,\ldots,m\}\,. $$ The idea of nonlinear rescaling (NR) is to replace the inequalities by scaled inequalities $p_i\varphi\left(\frac{\mathfrak{g}_i(x)}{p_i}\right)\leq 0$ with a penalty function $\varphi$ and a penalty parameter $p_i>0$. Here, $\varphi$ is a strictly increasing, twice differentiable, real-valued, strictly convex function with $\operatorname{dom} \varphi = (- \infty, b)$, $0 < b \leq \infty$, which has the following properties: \begin{itemize} \item[$(\varphi 1)$] $\qquad \varphi (0) = 0$ \item[$(\varphi 2)$] $\qquad \varphi^{\prime} (0) = 1$ \item[$(\varphi 3)$] $\qquad{\displaystyle \lim_{s \rightarrow b}} \ \varphi^{\prime} (s) = \infty $ \item[$(\varphi 4)$] $\qquad {\displaystyle \lim_{s \rightarrow - \infty}} \varphi^{\prime} (s) = 0 $.
\end{itemize} \smallskip\noindent Then the ``penalized'' problem \begin{equation}\label{eq:penalized} \min_x \{\mathfrak{f}(x)\mid p_i\varphi\left(\frac{\mathfrak{g}_i(x)}{p_i}\right)\leq 0,\ i=1,\ldots,m\} \end{equation} remains convex and has the same feasible set and thus the same solution as the original one. We formulate a standard Lagrangian function of the penalized problem that can be considered an augmented Lagrangian function of the original problem: \begin{equation}\label{eq:lagr} {\cal L}(x,\mu; p) = \mathfrak{f}(x) + \sum_{i=1}^m \mu_i p_i \varphi\left(\frac{\mathfrak{g}_i(x)}{p_i}\right)\,. \end{equation} At each iteration of the NR method, we minimize the augmented Lagrangian with respect to $x$ \begin{align} \mbox{\it Step 1.} &\qquad x^{k+1} \approx \arg\min_x {\cal L}(x, \mu^k; p^k) \label{eq:105}\\ \intertext{and update the multipliers and the penalty parameter:} \mbox{\it Step 2.} &\qquad \mu_i^{k+1} = \mu_i^k \varphi^{\prime} \left(\frac{\mathfrak{g}_i(x^{k+1})} {p_i^k}\right) \label{eq:106} \\ \mbox{\it Step 3.} &\qquad p_i^{k+1} = \pi p_i^k \ . \label{eq:107} \end{align} Here $0<\pi<1$ is a penalty updating factor. The meaning of the ``$\approx$'' sign in Step~1 is that the unconstrained minimization problem is only solved approximately, until $\|\nabla_{\!x}\,{\cal L}(x,\mu;p)\|\leq\varepsilon$, where $\varepsilon$ is some prescribed tolerance. For more details on the NR methods, their analysis and numerical performance, see the references above. In Step 1 we need to solve, approximately, an unconstrained optimization problem. For this, we will use the Newton method. Therefore, we will need formulas for the gradient and Hessian of ${\cal L}$ with respect to the primal variable $x$: \begin{equation}\label{eq:lagr_grad} \nabla_{\!x}\,{\cal L}(x,\mu;p) = \nabla_{\!x}\, \mathfrak{f}(x) + \sum_{i=1}^m \mu_i \varphi' \left(\frac{\mathfrak{g}_i(x)}{p_i}\right)\nabla_{\!x}\, \mathfrak{g}_i(x) \end{equation} and \begin{equation} \label{eq:lagr_hess} \begin{aligned} \nabla^2_{\!xx}\,{\cal L}(x,\mu;p) = &\nabla^2_{\!xx}\, \mathfrak{f}(x) + \sum_{i=1}^m \frac{\mu_i}{p_i} \varphi'' \left(\frac{\mathfrak{g}_i(x)}{p_i}\right)\nabla_{\!x}\, \mathfrak{g}_i(x)(\nabla_{\!x}\, \mathfrak{g}_i(x))^\top \\ &+ \sum_{i=1}^m \mu_i \varphi' \left(\frac{\mathfrak{g}_i(x)}{p_i}\right) \nabla^2_{\!xx}\,\mathfrak{g}_i(x)\,. \end{aligned} \end{equation} Note that, due to the convexity of the penalized problem \eqref{eq:penalized}, the Hessian of ${\cal L}$ is positive semidefinite for any arguments $x\in\mathbb{R}^n$, $\mu\in\mathbb{R}^m_+$. \citeauthor{ben1997penalty} \cite{ben1997penalty} analyzed one particular choice of the penalty function $\varphi$ defined as follows: \begin{equation} \varphi_{\hat{\tau}} (\tau) = \left \{ \begin{array}{ll} \tau + \frac{1}{2} \, \tau^2 & \tau \geq \hat{\tau} \\[0.2cm] - (1+ \hat{\tau})^2 \log \left ( \frac{1+ 2 \hat{\tau} -\tau} {1 + \hat{\tau}} \right) + \hat{\tau} + \frac{1}{2} \hat{\tau}^2 \quad & \tau < \hat{\tau} \ . \end{array} \right . \label{103.1} \end{equation} By setting $\hat{\tau} = - \frac{1}{2}$, we get a pure (not shifted) logarithmic branch. As this function combines properties of the quadratic penalty function and the logarithmic barrier function, it is called a penalty-barrier function and the resulting algorithm a penalty-barrier multiplier method. This method proved to be very efficient and we will use it to solve the dual VTS problem.
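To make the definition concrete, the following short sketch implements $\varphi_{\hat{\tau}}$ and its derivative and verifies properties $(\varphi 1)$ and $(\varphi 2)$ numerically. It is an illustration only (Python/numpy, ad hoc names), not part of our implementation.
\begin{verbatim}
import numpy as np

def phi(t, that=-0.5):
    """Penalty-barrier function (quadratic branch for t >= that,
    shifted logarithmic branch for t < that), cf. the definition above."""
    t = np.asarray(t, dtype=float)
    return np.piecewise(
        t, [t >= that],
        [lambda x: x + 0.5 * x**2,
         lambda x: -(1 + that)**2 * np.log((1 + 2*that - x) / (1 + that))
                   + that + 0.5 * that**2])

def dphi(t, that=-0.5):
    """Derivative of phi: 1 + t on the quadratic branch,
    (1 + that)^2 / (1 + 2*that - t) on the logarithmic branch."""
    t = np.asarray(t, dtype=float)
    return np.piecewise(
        t, [t >= that],
        [lambda x: 1.0 + x,
         lambda x: (1 + that)**2 / (1 + 2*that - x)])

# properties (phi1) and (phi2): phi(0) = 0 and phi'(0) = 1
assert abs(phi(0.0)) < 1e-14 and abs(dphi(0.0) - 1.0) < 1e-14
\end{verbatim}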
\subsection{PBM for the dual VTS problem} Let us now apply the PBM method to the dual problem \eqref{eq:dual}. The augmented Lagrangian for this problem is defined as \begin{equation} \label{eq:aug_lagr} \begin{aligned} {\cal L}(u,\alpha,\unu, \onu, \rho,\umu, \omu) = \ & \alpha V-f^\top u -\urho^\top\unu+\orho^\top\onu \\ &+ \sum_{i=1}^m \rho_i p_i \varphi\left(\frac{1}{p_i}(\frac{1}{2}u^\top K_i u - \alpha + \unu_i - \onu_i)\right) \\ &+ \sum_{i=1}^m \umu_i \uq_i \varphi\left(\frac{-\unu_i}{\uq_i}\right) + \sum_{i=1}^m \omu_i \oq_i \varphi\left(\frac{-\onu_i}{\oq_i}\right) \end{aligned} \end{equation} with Lagrangian multipliers $\rho\in\mathbb{R}^m$, $\umu\in\mathbb{R}^m$ and $\omu\in\mathbb{R}^m$ and penalty parameters $p\in\mathbb{R}^m$, $\uq\in\mathbb{R}^m$ and $\oq\in\mathbb{R}^m$. To simplify the notation, let us define the aggregate variable $$ \xi := (u,\alpha,\unu,\onu) $$ and vectors of penalized constraints as \begin{align*} \tg_i(\xi)&=\tg_i(u,\alpha,\unu,\onu) := \varphi\left(\frac{1}{p_i}(\frac{1}{2}u^\top K_i u - \alpha +\unu_i - \onu_i)\right), \quad i=1,\ldots,m \,,\\ \uh_i(\xi)&=\uh_i(u,\alpha,\unu,\onu) := \varphi\left(\frac{-\unu_i}{\uq_i}\right), \quad i=1,\ldots,m\,,\\ \oh_i(\xi)&=\oh_i(u,\alpha,\unu,\onu) := \varphi\left(\frac{-\onu_i}{\oq_i}\right), \quad i=1,\ldots,m\,. \end{align*} Let $s_i(\xi)$ denote the argument of $\varphi(\cdot)$ in the definition of $\tg_i(\xi)$ above. In the following, the notation $\tg'_i(\xi)$ will be understood as $\varphi^\prime(s_i(\xi))$, rather than a composite derivative of $\varphi(s_i(\xi))$ with respect to $\xi$. We define $\uh'_i(\cdot)$ and $\oh'_i(\cdot)$ analogously, as well as $\tg''(\cdot)$, $\uh''(\cdot)$ and $\oh''(\cdot)$. According to \eqref{eq:lagr_grad}, the gradient of the augmented Lagrangian with respect to the aggregate variable $\xi$ is \begin{equation} \label{eq:lagr_grad_a} \begin{aligned} &\nabla_{\!\xi}\,{\cal L}(\cdot) = \begin{bmatrix}{\rm res}_1\\{\rm res}_2\\{\rm res}_3\\{\rm res}_4\end{bmatrix} =\begin{bmatrix} -f\\V\\-\urho\\\orho\end{bmatrix} + \sum_{i=1}^m \rho_i\tilde{g}'_i(\xi)\begin{bmatrix} K_iu\\-1\\e_i\\-e_i\end{bmatrix} + \sum_{i=1}^m \umu_i\uh'_i(\xi)\begin{bmatrix} 0\\0\\-e_i\\0\end{bmatrix} + \sum_{i=1}^m \omu_i\oh'_i(\xi)\begin{bmatrix} 0\\0\\0\\-e_i\end{bmatrix}\,. \end{aligned} \end{equation} To further simplify the notation, we define \begin{align*} \rhop_i = \rhop_i(\xi) &:= \rho_i \tg'_i(\xi) \,, & \umup_i = \umup_i(\xi) & := \umu_i \uh'_i(\xi) \,, & \omup_i = \omup_i(\xi) & := \omu_i \oh'_i(\xi) \,, \\[0.2em] \rhopp_i = \rhopp_i(\xi) &:= \dfrac{\rho_i}{p_i}\tg''_i(\xi) \,, & \umupp_i = \umupp_i(\xi) & := \dfrac{\umu_i}{\uq_i} \uh''_i(\xi) \,, & \omupp_i = \omupp_i(\xi) & := \dfrac{\omu_i}{\oq_i} \oh''_i(\xi) \,.
\end{align*} By \eqref{eq:lagr_hess}, the Hessian of the augmented Lagrangian will take the form \begin{equation}\label{eq:lagr_hess_a} \nabla^2_{(\!u,\alpha,\unu,\onu)^2}\,{\cal L}(\cdot) = \begin{bmatrix} H_{11}&H_{12}&H_{13}&H_{14}\\ H_{12}^\top &H_{22}&H_{23}&H_{24}\\ H_{13}^\top &H_{23}^\top &H_{33}&H_{34}\\ H_{14}^\top &H_{24}^\top &H_{34}^\top&H_{44} \end{bmatrix} \end{equation} where \begin{align*} H_{11} &= \displaystyle\sum_{i=1}^m \rhopp_i K_iu u^\top K_i^\top + \displaystyle\sum_{i=1}^m \rhop_i K_i,\quad H_{11}\in\mathbb{R}^{n\times n}\\ H_{12}&=-\displaystyle\sum_{i=1}^m \rhopp_i K_iu ,\quad H_{12}\in\mathbb{R}^{n\times 1}\\ H_{13}& = \left[\rhopp_1 K_1u,\ldots,\rhopp_m K_mu\right], \quad H_{13}\in\mathbb{R}^{n\times m}\\ H_{14}& = \left[-\rhopp_1 K_1u,\ldots,-\rhopp_m K_mu\right], \quad H_{14}\in\mathbb{R}^{n\times m}\\ H_{22}&=\displaystyle\sum_{i=1}^m\rhopp_i,\quad H_{22}\in\mathbb{R}\\ H_{23}&=\left[ -\rhopp_1,\ldots, -\rhopp_m\right],\quad H_{23}\in\mathbb{R}^{1\times m}\\ H_{24}&=\left[\rhopp_1,\ldots,\rhopp_m\right],\quad H_{24}\in\mathbb{R}^{1\times m}\\ H_{33}&=\diag(\rhopp_1+\umupp_1,\ldots,\rhopp_m+\umupp_m),\quad H_{33}\in\mathbb{R}^{m\times m}\\ H_{34}&=\diag(-\rhopp_1,\ldots,-\rhopp_m),\quad H_{34}\in\mathbb{R}^{m\times m}\\ H_{44}&=\diag(\rhopp_1+\omupp_1,\ldots,\rhopp_m+\omupp_m),\quad H_{44}\in\mathbb{R}^{m\times m}\,. \end{align*} By \eqref{eq:106}, the Lagrange multipliers in the PBM algorithm are never equal to zero. Hence, the matrices $H_{33}, H_{34}, H_{44}$ are diagonal and positive or negative definite, so we can easily calculate the following inverse of the lower right block of the Hessian; since all four blocks are diagonal, the inverse again consists of diagonal blocks: $$ \begin{bmatrix}H_{33} &H_{34}\\H_{34}^\top &H_{44}\end{bmatrix}^{-1} = \begin{bmatrix}H_{33}^{-1}+H_{33}^{-1}H_{34}ZH_{34}^\top H_{33}^{-1} &-H_{33}^{-1}H_{34}Z\\ -ZH_{34}^\top H_{33}^{-1} &Z\end{bmatrix} $$ with $Z=(H_{44}-H_{34}^\top H_{33}^{-1}H_{34})^{-1}$. We will require this inverse further below. Observe that the matrix $H_{11}$ has the same sparsity structure as the ``unscaled'' stiffness matrix $\displaystyle\sum_{i=1}^m K_i$. Indeed, the only non-zero components of the vector $K_iu$ are those corresponding to non-zero rows of $K_i$; hence $(K_iu)(K_iu)^\top$ has the same sparsity structure as $K_i$. For this reason, the matrices $H_{13} H_{13}^\top $ and $H_{14} H_{14}^\top $ have the same sparsity structure as $H_{11}$ and thus $\displaystyle\sum_{i=1}^m K_i$. This property extends to any matrices $H_{13} D H_{13}^\top $ and $H_{14} D H_{14}^\top $, where $D$ is a diagonal matrix. We now calculate the following Schur complement matrix \begin{equation}\label{eq:S} S = \begin{bmatrix} H_{11}&H_{12}\\ H_{12}^\top &H_{22} \end{bmatrix} - \begin{bmatrix}H_{13}& H_{14}\\H_{23} &H_{24}\end{bmatrix} \begin{bmatrix}H_{33} &H_{34}\\H_{34}^\top &H_{44}\end{bmatrix}^{-1} \begin{bmatrix}H_{13}^\top &H_{23}^\top\\ H_{14}^\top&H_{24}^\top \end{bmatrix} \; \in\mathbb{R}^{(n+1)\times (n+1)} \, . \end{equation} By the previous considerations, the principal $n\times n$ submatrix of $S$ has the same sparsity structure as the stiffness matrix $\displaystyle\sum_{i=1}^m K_i$; the last row and column of $S$ are full. Figure~\ref{fig:1} shows typical examples of the sparsity structure of the Hessian of the augmented Lagrangian $\nabla^2_{\!\xi\xi}\,{\cal L}(\cdot)$ in \eqref{eq:lagr_hess_a} and the Schur complement matrix $S$.
\begin{figure}[h] \begin{center} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\hsize]{spy_L.pdf} \caption{$\nabla^2 \cal L$} \end{subfigure} \hspace{2em} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\hsize]{spy_S.pdf} \caption{$S$} \label{subfig:1right} \end{subfigure} \end{center} \caption{\label{fig:1}Typical sparsity structure of the Hessian of the augmented Lagrangian for the dual topology optimization problem (left) and its Schur complement (right).} \end{figure} The first step of the PBM algorithm is to solve approximately the unconstrained minimization problem $$ \min_{u,\alpha,\unu,\onu} {\cal L}(u,\alpha,\unu, \onu, \rho,\umu,\omu) $$ by the Newton method. In every step of the Newton method, we have to solve the system of linear equations $$ \nabla^2_{(\!u,\alpha,\unu,\onu)^2}\,{\cal L} (u,\alpha,\unu,\onu, \rho,\umu,\omu)\cdot (\Delta u,\Delta\alpha,\Delta\nu) = -\nabla_{(\!u,\alpha,\unu,\onu)}\,{\cal L}(u,\alpha,\unu,\onu, \rho,\umu,\omu)\,, $$ where $(\Delta u,\Delta\alpha,\Delta\nu)$ is the Newton increment and $\Delta \nu:=(\Delta\unu,\Delta\onu)$. Equivalently, according to the above development, we can instead solve the reduced system \begin{equation}\label{eq:system} S \; \begin{bmatrix} \Delta u\\\Delta\alpha \end{bmatrix}= \mbox{\it rhs} \,, \end{equation} where, by \eqref{eq:lagr_grad_a}, \begin{align*} \mbox{\it rhs} = &-\begin{bmatrix} -f\\V\end{bmatrix} - \sum_{i=1}^m \rhop_i\begin{bmatrix} K_iu\\-1\end{bmatrix}\\[0.2em] &+ \begin{bmatrix}H_{13}&H_{14}\\ H_{23}&H_{24}\end{bmatrix} \begin{bmatrix}H_{33} &H_{34}\\H_{34}^\top &H_{44}\end{bmatrix}^{-1} \left( \begin{bmatrix}-\urho\\ \orho\end{bmatrix} + \sum_{i=1}^m \rhop_i \begin{bmatrix} e_i \\ -e_i \end{bmatrix} + \sum_{i=1}^m \umup_i \begin{bmatrix} -e_i \\ 0 \end{bmatrix} + \sum_{i=1}^m \omup_i \begin{bmatrix} 0 \\ -e_i \end{bmatrix} \right) \,. \end{align*} Recall that the dual problem \eqref{eq:dual} is convex, hence the Hessian of ${\cal L}$ is positive semidefinite and, consequently, so is the Schur complement $S$. The remaining component $\Delta \nu$ can be reconstructed from the solution to \eqref{eq:system} as follows: \begin{equation} \label{eq:deltanu} \begin{aligned} \Delta \nu = \; - \begin{bmatrix}H_{33} &H_{34}\\H_{34}^\top &H_{44}\end{bmatrix}^{-1} & \left( \; \begin{bmatrix}-\urho\\\orho\end{bmatrix} + \sum_{i=1}^m \rhop_i \begin{bmatrix}e_i\\-e_i\end{bmatrix} + \sum_{i=1}^m \umup_i \begin{bmatrix}-e_i\\0\end{bmatrix} + \sum_{i=1}^m \omup_i \begin{bmatrix}0\\-e_i\end{bmatrix} \right. \\[0.2em] & + \left. \begin{bmatrix}H_{13}^\top \\ H_{14}^\top\end{bmatrix} \Delta u + \begin{bmatrix}H_{23}^\top \\ H_{24}^\top\end{bmatrix} \Delta\alpha\right)\,. \end{aligned} \end{equation} After the augmented Lagrangian has been minimized, we check for convergence. For this, we use the duality gap $\delta(u,\alpha)$ in \eqref{eq:gap}, scaled by the dual objective function, henceforth denoted by $d(u,\alpha,\unu, \onu)$, as a measure of optimality. If convergence has not yet been achieved, the multipliers are updated, imposing the safeguard rule used in \cite{ben1997penalty}, followed by the penalty parameters. The PBM method is summarized in Algorithm~\ref{alg:pbm}. It employs the Newton method with backtracking line search using the Armijo rule; see Algorithm~\ref{alg:pbm_newton}.
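Since all four blocks of the lower right block of the Hessian are diagonal, applying its inverse amounts to solving $m$ independent $2\times 2$ systems. The sketch below assembles $S$ of \eqref{eq:S} in exactly this way, with dense matrices for readability; it is a schematic illustration under ad hoc naming, not our sparse Matlab implementation.
\begin{verbatim}
import numpy as np

def solve_2x2_blocks(h33, h34, h44, r3, r4):
    """Solve [[H33, H34], [H34, H44]] [y3; y4] = [r3; r4] where the blocks
    are diagonal (stored as 1-D arrays): m decoupled 2x2 systems."""
    det = h33 * h44 - h34 * h34
    return (h44 * r3 - h34 * r4) / det, (h33 * r4 - h34 * r3) / det

def pbm_schur(H11, H12, H22, H13, H14, h23, h24, h33, h34, h44):
    """Assemble the Schur complement S of (eq:S), densely for readability."""
    n, m = H13.shape
    A = np.block([[H11, H12.reshape(n, 1)],
                  [H12.reshape(1, n), np.array([[H22]])]])
    B = np.vstack([np.hstack([H13, H14]),      # [[H13, H14],
                   np.hstack([h23, h24])])     #  [H23, H24]]
    CinvBt = np.empty((2 * m, n + 1))
    for j in range(n + 1):                     # apply C^-1 to each column of B^T
        y3, y4 = solve_2x2_blocks(h33, h34, h44, B[j, :m], B[j, m:])
        CinvBt[:m, j], CinvBt[m:, j] = y3, y4
    return A - B @ CinvBt
\end{verbatim}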
The stopping criterion for the Newton method uses the weighted residual term \begin{equation} \label{eq:resIP} \tres_{PBM} = \dfrac{ \| {\rm res}_1 \|_2 }{ \| f \|_2 } + \dfrac{ | {\rm res}_2 | }{ V } + \dfrac{ \| {\rm res}_3 \|_2 }{ \| \urho \|_2 + \| \orho \|_2 } \end{equation} as a measure of feasibility. The stopping parameter is adjusted adaptively in each PBM iteration and this warrants some clarification. Setting the Newton method tolerance too low in early stages of the PBM method leads to an increase in Newton iterations and thus in computational time without significantly changing the convergence behavior of the PBM. A ``soft'' tolerance of 100 times the current optimality measure has proven to be a good choice. At the same time, however, we want to guarantee that the final solution has a certain degree of feasibility, which requires the system \eqref{eq:system} to be solved to a certain accuracy. For this reason, after the final PBM iteration, we run the Newton method one more time with decreased tolerance and then update $\rho$. For the sake of completeness, it should be noted that the solution $(u,\alpha,\unu,\onu)$ obtained by this additional call to the Newton method is not guaranteed to still satisfy the stopping criterion on Line \ref{algline:pbm_stopcrit} of Algorithm \ref{alg:pbm}. It is possible that it was previously only satisfied due to the inaccuracy of the solution\footnote{Note that $\delta(u,\alpha)$ is only a valid duality gap for \emph{feasible} solutions $u$ and $\alpha$.}. In the vast majority of our numerical experiments, however, this was not an issue and $|\delta(u,\alpha)/d(u,\alpha,\unu, \onu)|$ remained below the stopping parameter $\tolsym{PBM}$. \begin{algorithm} \caption{PBM} \label{alg:pbm} Let $0<\beta< 1$, $0<\gamma< 1$, $p_{\min},\uq_{\min},\oq_{\min}>0$, $\tolsym{PBM}>0$, $\tolsym{NWT}>0$ and $\tolsym{NWT}^{\min}>0$ be given. Choose initial vectors $(u,\alpha,\unu, \onu)$ and $(\rho,\umu,\omu)$. Set $p=\uq=\oq=e\in\mathbb{R}^m$. 
\\[-\baselineskip] \begin{algorithmic}[1] \Repeat \State{Minimize the augmented Lagrangian \eqref{eq:aug_lagr} with respect to $(u,\alpha,\unu, \onu)$ by Algorithm~\ref{alg:pbm_newton} with stopping tolerance $\tolsym{NWT}$ \label{algline:pbm_newton}} \State{Compute the duality gap $\delta(u,\alpha)$ by \eqref{eq:gap} and the dual objective function value $d(u,\alpha,\unu, \onu)$} \If{$|\delta(u,\alpha)/d(u,\alpha,\unu, \onu)| < \tolsym{PBM}$} \label{algline:pbm_stopcrit} \State STOP \EndIf \State Update the multipliers \begin{align*} \rho_i^+ &= \rho_i \varphi^{\prime}\left(\frac{1}{p_i}(\frac{1}{2}u^\top K_i u - \alpha + \unu_i - \onu_i)\right),\quad i=1,\ldots,m \\[0.5em] \umu_i^+ &= \umu_i \varphi^{\prime}\left(\frac{-\unu_i}{\uq_i}\right), \quad \omu_i^+ = \omu_i \varphi^{\prime}\left(\frac{-\onu_i}{\oq_i}\right),\quad i=1,\ldots,m \end{align*} \State If necessary, correct the multipliers such that $$ \beta\rho_i \leq \rho_i^+ \leq\frac{1}{\beta}\rho_i,\quad \beta\umu_i \leq \umu_i^+ \leq\frac{1}{\beta}\umu_i,\quad \beta\omu_i \leq \omu_i^+ \leq\frac{1}{\beta}\omu_i,\quad i=1,\ldots,m $$ and set\ \ $ \rho = \rho^+,\ \umu=\umu^+,\ \omu=\omu^+.$ \label{algline:pbm_update_mult} \State{Update the penalty parameters $$ p_i = \max\{\gamma\, p_i,p_{\rm min}\},\quad \uq_i = \max\{\gamma\, \uq_i,\uq_{\rm min}\}, \quad \oq_i = \max\{\gamma\, \oq_i,\oq_{\rm min}\}, $$ for $i=1,\ldots,m$} \State{Update the stopping tolerance for Algorithm~\ref{alg:pbm_newton} $$ \tolsym{NWT} = \max \left\{\, \min \left\{ 100 \cdot \left|\dfrac{\delta(u,\alpha)}{d(u,\alpha,\unu, \onu)} \right| \,, \tolsym{NWT} \right\} \; , \; \tolsym{NWT}^{\min} \, \right\} $$} \Until{convergence} \State Set $\tolsym{NWT} = 10\cdot\tolsym{PBM}$ and repeat Line \ref{algline:pbm_newton} \State Update $\rho$ as done in Line \ref{algline:pbm_update_mult} \end{algorithmic} \end{algorithm} Our choice of parameters in Algorithm~\ref{alg:pbm} was $\beta = \gamma = 0.3$, $p_{\min}=\uq_{\min}=\oq_{\min} = 10^{-8}$, $\tolsym{NWT}=1$ and $\tolsym{NWT}^{\rm min}=10^{-3}$. The initial values were $u=0$, $\alpha=1$, $\unu=\onu=e$, $\rho_i=V/m$, for all $i=1,\dots,m$, and $\umu=\omu=e$. Note that Algorithm~\ref{alg:pbm_newton} is an inexact Newton method, which uses a preconditioned Krylov subspace method, as described later in Section~\ref{sec:MG}. Let us reiterate that the principal $n\times n$ submatrix of $S$ in \eqref{eq:system} has the same sparsity structure as the stiffness matrix $K(\rho)$. This will allow us in Section~\ref{sec:MG} to develop a multigrid preconditioner using the standard prolongation/restriction operators for the stiffness matrix. \begin{algorithm} \caption{PBM NEWTON} \label{alg:pbm_newton} Let vectors $(u,\alpha,\nu)$ and $(\rho,\umu,\omu)$ be given, where $\nu=(\unu,\onu)$. Let $\tolsym{NWT}$ be given.
\begin{algorithmic}[1] \Repeat \State{Compute matrix $S$ from \eqref{eq:S} and the corresponding right-hand side} \State{Solve (approximately) the linear system \eqref{eq:system} to find $\Delta u, \Delta\alpha$} \State{Compute $\Delta\nu$ from \eqref{eq:deltanu} with data $\Delta u, \Delta\alpha$} \State{Perform backtracking line search with Armijo rule to find step length $\kappa$} \State{Update $u,\alpha,\nu$: $ u = u + \kappa\Delta u,\quad \alpha = \alpha + \kappa\Delta \alpha,\quad \nu = \nu + \kappa\Delta \nu $ } \If{$\tres_{PBM} < \tolsym{NWT}$} \State {STOP} \EndIf \Until convergence \end{algorithmic} \end{algorithm} \section{Multigrid preconditioned Krylov subspace methods}\label{sec:MG} In the previous sections, we have introduced three algorithms for the solution of the VTS problem, all of which have one thing in common: in every iteration, we have to solve a system of linear equations \begin{equation}\label{eq:lineq} Az=b \,, \end{equation} where $b\in\mathbb{R}^n$ and $A$ is an $n\times n$ symmetric positive definite matrix. In the OC method, $A$ is the stiffness matrix $K(\rho)$ of the linear elasticity problem. In algorithms PBM and IP, $A$ corresponds to the Schur complements $S$ from equations \eqref{eq:S} and \eqref{eq:S_IP}, respectively. These latter two matrices have the same sparsity structure. In particular, with $n$ now denoting the size of $S$, the principal $(n-1)\times (n-1)$ submatrix of $S$ has the same sparsity structure as the stiffness matrix $K(\rho)$; the last row and column of $S$ are full. In this section, we will recall an iterative method that is known to be very efficient for linear elasticity problems on well structured finite element meshes. Throughout the section, we will use the notation of \eqref{eq:lineq}. \subsection{Multigrid preconditioned MINRES} We use the standard V-cycle correction scheme multigrid method with coarse level problems $$ A_k z^{(k)}=b^{(k)},\quad k=1,\ldots,\ell-1\,, $$ where $$ A_{k-1} = I_k^{k-1} (A_k) I_{k-1}^k,\quad b^{(k-1)} = I_k^{k-1} (b^{(k)}),\quad k=2,\ldots,\ell\,. $$ Here we assume that there exist $\ell-1$ linear operators $I_k^{k-1}:\mathbb{R}^{n_k}\to\mathbb{R}^{n_{k-1}}$, $k=2,\ldots,\ell$, with $n:=n_\ell>n_{\ell-1}>\cdots>n_2>n_1$, and let $I_{k-1}^k:=(I_k^{k-1})^\top$. As a smoother, we use the Gauss-Seidel iterative method. See, e.g., \cite{hackbusch} for details. Although the multigrid method is very efficient, an even more efficient tool for solving \eqref{eq:lineq} may be a preconditioned Krylov type method, where the preconditioner consists of one V-cycle of the multigrid method\footnote{We found more than one V-cycle to not be as efficient in terms of overall CPU time.}. After experimenting with several Krylov methods, we found that the MINRES algorithm \cite{paige1975solution} is the most robust for our problems, in which the system matrix may converge to a positive semidefinite matrix. We use the standard implementation of MINRES from \cite{barrett1994templates}. \subsection{Multigrid MINRES for PBM, IP and OC} In all examples in Section~\ref{sec:Num}, we use hexahedral finite elements with trilinear basis functions for the displacement variable $u$ and constant basis functions for the variable $\rho$, as is standard in topology optimization. We start with a very coarse mesh and use regular refinement, subdividing each element into 8 new elements. The prolongation operators $I_{k-1}^k$ for the variable $u$ are based on a standard 27-point interpolation scheme. For more details, see, e.g., \cite{hackbusch}.
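The following self-contained sketch shows the construction on a 1-D model problem: Galerkin coarse-level matrices, symmetric Gauss-Seidel smoothing and one V-cycle used as a preconditioner for SciPy's MINRES. It only illustrates the scheme; it is not our elasticity code (which uses the 27-point interpolation above), and all names are ad hoc.
\begin{verbatim}
import numpy as np
from scipy.sparse import diags, csr_matrix, tril, triu
from scipy.sparse.linalg import LinearOperator, minres, spsolve_triangular

def poisson1d(n):                       # 1-D Laplacian as a model problem
    return diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')

def prolongation(nc):                   # linear interpolation: nc -> 2*nc + 1
    P = np.zeros((2 * nc + 1, nc))
    for j in range(nc):
        P[2*j, j] += 0.5; P[2*j + 1, j] = 1.0; P[2*j + 2, j] += 0.5
    return csr_matrix(P)

def smooth(A, b, x, backward=False):    # one forward/backward GS sweep
    T = triu(A, format='csr') if backward else tril(A, format='csr')
    return x + spsolve_triangular(T, b - A @ x, lower=not backward)

def vcycle(As, Ps, b, k=0):             # symmetric V-cycle, finest level k=0
    if k == len(As) - 1:
        return np.linalg.solve(As[k].toarray(), b)      # coarsest: direct
    x = smooth(As[k], b, np.zeros_like(b))              # pre-smoothing
    x += Ps[k] @ vcycle(As, Ps, Ps[k].T @ (b - As[k] @ x), k + 1)
    return smooth(As[k], b, x, backward=True)           # post-smoothing

# Galerkin hierarchy: 63 -> 31 -> 15 -> 7 unknowns
As, Ps = [poisson1d(63)], []
for nc in (31, 15, 7):
    Ps.append(prolongation(nc))
    As.append(csr_matrix(Ps[-1].T @ As[-1] @ Ps[-1]))

M = LinearOperator((63, 63), matvec=lambda r: vcycle(As, Ps, r))
b = np.ones(63)
z, info = minres(As[0], b, M=M)
print(info, np.linalg.norm(b - As[0] @ z))  # 0 and a small residual
\end{verbatim}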
When solving the linear systems \eqref{eq:system} and \eqref{eq:IP_system} in PBM and IP, we also need to prolong and restrict the single additional variable $\alpha$; here we simply use the identity. When we use the regular finite element refinement mentioned above, the stiffness matrix $K(\rho)$ will be sparse and, if a reasonably good numbering of the nodes is used, banded. The number of non-zero elements in a row of $K(\rho)$ does not exceed 81. A typical non-zero structure of $K$ is shown in Figure~\ref{subfig:1right}, if we ignore the additional last column and row in that figure. As usual, the MINRES method is stopped whenever \begin{equation}\label{eq:cgstop} \| r \|\leq \tolsym{MR}\|b\| \,, \end{equation} where $ r $ is the residual and $b$ the right-hand side of the linear system. The choice of the stopping parameter $\tolsym{MR}$ varies between the different algorithms. \paragraph{Multigrid MINRES for OC}\label{sec:CGOC} The only degree of freedom in the algorithm is the stopping criterion. The required accuracy of the linear system solutions (such that the overall convergence is maintained) is well documented and theoretically supported in the case of the IP method; it is, however, unknown in the case of the DOC method; see \cite{amir} for a detailed discussion. Clearly, if the linear systems in the DOC method are solved too inaccurately, the whole method may diverge or just oscillate around a point which is not the solution. In all our numerical experiments, we used $\tolsym{MR}=10^{-4}$. In \cite{MK_Mohammed_2016}, it was observed that, with this stopping criterion, the number of DOC iterations was almost always the same, whether we used an iterative or a direct solver for the linear systems. Our experiments with 3D problems confirmed this observation. \paragraph{Multigrid MINRES for PBM} The initial stopping parameter $\tolsym{MR}$ scales with the size of the problem, as it can otherwise be too strict for large problems or too imprecise for small problems. We initialize and update it in the following way: \begin{itemize} \item We start with $\tolsym{MR}=10^{-4}\sqrt{n}$. \item Let $\tres_{PBM}$ be the sum of the residua computed in the current step of the PBM Newton Algorithm~\ref{alg:pbm_newton} and let $\tres_{PBM}^+$ be this sum in the following step. If $\tres_{PBM}^+>0.9\,\tres_{PBM}$, we update $\tolsym{MR} := \max\{0.1\,\tolsym{MR},10^{-9}\}$. In other words, we increase the accuracy of the stopping parameter whenever we do not achieve a satisfactory improvement in feasibility and optimality with the current $\tolsym{MR}$. \end{itemize} In our numerical tests, the update had to be done only in a few cases and the smallest value of $\tolsym{MR}$ needed was $\tolsym{MR}=10^{-3}$. \paragraph{Multigrid MINRES for IP} In the IP method, we use an adaptive updating scheme for the stopping parameter, based on the complementarity of the current solution: \begin{itemize} \item We start with $\tolsym{MR} = 10^{-2}$. \item We compute $$ d = \max \left\{ \max_{i=1,\dots,m} | (\rho_i - \urho_i) \unu_i | \,, \max_{i=1,\dots,m} | (\orho_i - \rho_i) \onu_i | \right\} $$ and set $\tolsym{MR} := \max\{ 100\,d, 10^{-9} \} $ if this new value is lower than the current $\tolsym{MR}$. The low minimum value of $10^{-9}$ for $\tolsym{MR}$ has proven to be necessary for convergence in our experiments. \end{itemize} \section{Numerical experiments}\label{sec:Num} We now present and compare numerical results for the PBM, IP and DOC methods.
In Section \ref{subsec:m-scale}, we focus on a rigorous comparison of the performance of the three algorithms, both in terms of CPU time and required calls to the iterative solver. For this, we look at problems where the number of finite elements is in the order of $10^4$ to $10^5$. As we will see, the PBM method outperforms both the IP and the DOC methods. When we consider problems with over a million finite elements in Section \ref{subsec:l-scale}, we only present results for IP and PBM, since DOC with our required accuracy is no longer practicable. In the formulation of the VTS problem \eqref{eq:to_intro}, we chose the lower bounds $\urho$ to be positive. As far as the underlying physical model is concerned, however, $\urho=0$ would make the most sense, with $\rho_i=0$ corresponding to an element without material. A positive lower bound might thus distort the physically more accurate results. Yet the strict positivity is required for the positive definiteness of $K(\rho)$ and to bound the condition number of the system matrices arising in the different methods. In our experiments, this turned out to be critical for the OC and IP methods, but not for the PBM method. Therefore, we generally set $\urho=0$ for PBM only. The code was implemented in Matlab, outsourcing certain subroutines to C via MEX files. No parallelization was performed in any of our functions. While the inbuilt Matlab routines are in general parallelizable, Matlab was limited to a single core on the BlueBEAR HPC system used to produce the large-scale results in Section~\ref{subsec:l-scale}. The design domain for each of our example problems is set up in a way that is based on a multigrid structure. It is a cuboid defined by $m_x\times m_y\times m_z$ cubes of equal size corresponding to the coarse level finite element mesh. We refine the coarse mesh regularly $\ell-1$ times, giving us $\ell$ mesh levels in total; each cube element is refined into 8 new elements of equal dimensions. Hence, a level-2 refinement of a $4\times 2 \times 2$ coarse mesh with 16 elements results in an $8\times 4 \times 4$ mesh with 128 elements, and a level-$\ell$ refinement of the same coarse mesh results in a mesh with $16\cdot 8^{\ell-1}$ elements. We consider two sets of boundary conditions and loading scenarios, referring to the first one as ``cantilever'' and to the second one as ``bridge''; see Figure~\ref{fig:mesh23} for the specifications. Cantilever problems have all nodes on the left-hand face fixed in all directions; a load in direction $z$ is applied in the middle of the right-hand face (Figure~\ref{subfig:mesh2}). The bridge problems are subject to a uniform load applied on a rectangle centered on the top face; all four corners of the bottom face are fixed in all directions (Figure~\ref{subfig:mesh3}). We adopt the following naming convention for the problems solved in this paper: \begin{description} \item{\bfseries CANT-{\boldmath $m_x$}-{\boldmath $m_y$}-{\boldmath $m_z$}-{\boldmath $\ell$}} for a cantilever with a $m_x\times m_y\times m_z$ coarse mesh and $\ell$ mesh levels; \item{\bf BRIDGE-{\boldmath $m_x$}-{\boldmath $m_y$}-{\boldmath $m_z$}-{\boldmath $\ell$}} for a bridge with a $m_x\times m_y\times m_z$ coarse mesh and $\ell$ mesh levels.
\end{description} \begin{figure}[h] \begin{center} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\hsize]{fig_mesh2.pdf} \caption{cantilever} \label{subfig:mesh2} \end{subfigure} \quad \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\hsize]{fig_mesh3.pdf} \caption{bridge} \label{subfig:mesh3} \end{subfigure} \end{center} \caption{\label{fig:mesh23}Boundary conditions and loads for cantilever and bridge problems.} \end{figure} \subsection{Comparison of PBM, IP and OC}\label{subsec:m-scale} The problems in this section have been solved on a 2018 MacBook Pro with a 2.3GHz dual-core Intel Core i5, Turbo Boost up to 3.6GHz, and 16GB RAM. This allowed us to properly compare the CPU times; however, it also prevented us from solving large-scale problems, due to memory limitations. The results for those problems, run on an HPC computer, are reported in the next section. \paragraph{Example CANT-16-2-2-5} In Table~\ref{tab:2} we present results for problem CANT-16-2-2-5 with 262\,144 finite elements. The lower bound for $\rho$ was set to zero for the PBM method and to $\urho=10^{-7}$ for the IP and DOC methods. Each table row shows the results for a certain method and stopping parameter. They are given in terms of the total number of linear systems solved\footnote{This is equal to the number of Newton iterations in the case of the PBM method. For the other two algorithms, there is no difference between the number of ``outer'' iterations and the number of Newton iterations.}; the total number of MINRES iterations; the total CPU time needed to solve the problem; the CPU time spent on solving the linear systems; and the final value of the primal objective function, where the accurate digits\footnote{Digits are assumed to be accurate when the different methods all appear to converge to them.} are in bold. Because the IP method had difficulties getting below our stopping threshold $\tolsym{IP}=10^{-5}$, we also ran this method with $\urho=10^{-3}$ for comparison, since this improves the conditioning of the system matrix. The resulting objective value is not comparable with the other values and is thus grayed out. We ran the DOC method with three different stopping tolerances. While $\tolsym{DOC}=10^{-2}$ would be used to mimic the {\tt top88} code, we can see in Figure~\ref{fig:OC} that the final result delivered with this tolerance is by no means optimal and clearly differs from that obtained with $\tolsym{DOC}=10^{-5}$ (for better transparency, Figure~\ref{fig:OC} presents results for the smaller problem CANT-16-2-2-4). Decreasing $\tolsym{DOC}$ to $10^{-3}$ improves the result but the image is still visibly different from the optimal one. This is despite the five correct significant digits in the objective function, reached by DOC with $\tolsym{DOC}=10^{-3}$. The results produced by PBM and IP were ``visually identical'' to the one for DOC with $\tolsym{DOC}=10^{-5}$ in Figure~\ref{subfig:OCc}. (Of course, this ``visual comparison'' is not rigorous but, in the end, the image is the required result of topology optimization in practice; a rigorous comparison is given in Table~\ref{tab:2}.) We also ran the PBM method with a lower stopping tolerance $\tolsym{PBM}=10^{-6}$ to demonstrate that the method can reach higher precision with only relatively few additional iterations. The numbers in Table~\ref{tab:2} show that PBM clearly outperforms the other two methods, both with respect to the number of MINRES iterations and to the CPU time required by the whole algorithm and the linear solver only.
It is even faster than the DOC method with the very relaxed stopping tolerance $\tolsym{DOC}=10^{-2}$, at the same time delivering a solution of much higher quality. \begin{table}[htbp] \centering \caption{Example CANT-16-2-2-5 solved by different methods. Problem dimensions: $m = 262\,144,\ n=836\,352$.} \renewcommand{\textbf}[1]{\fontseries{b}\selectfont #1\normalfont} \begin{tabular}{l r r r r r S[table-number-alignment = center, detect-weight, mode=text, table-format = 3.6]} \toprule & stop&\multicolumn{2}{c}{iterations} & \multicolumn{2}{c}{CPU time [s]} \\ method &tol & Nwt/OC & MINRES & total & lin~solv & \multicolumn{1}{r}{obj fun}\\ \midrule PBM & $10^{-5}$ & 42 & 156 & 916 & 317 & \textbf{66.192}8136\\ PBM & $10^{-6}$ & 48 & 259 & 1110 & 420 & \textbf{66.19273}18\\ IP & $10^{-5}$ & 36 & 4992 & 6510 & 5670 & \textbf{66.19272}49\\ IP($\urho=10^{-3}$) & $10^{-5}$ & 29 & 1243 & 1510 & 1070 & {\color{gray} 66.1988863}\\ DOC & $10^{-2}$ & 38 & 394 & 1160 & 883 & \textbf{66.2}107223\\ DOC & $10^{-3}$ & 226 & 2462 & 6710 & 5020 & \textbf{66.193}4912\\ DOC & $10^{-5}$ & 2759 & 30325 & 82500 & 61900 & \textbf{66.19272}72\\ \bottomrule \end{tabular}% \label{tab:2}% \end{table}% \begin{figure}[h] \begin{center} \begin{subfigure}{0.78\textwidth} \includegraphics[width=\hsize]{fig_OC-2_16224.pdf} \caption{$\tolsym{DOC} = 10^{-2}$} \label{subfig:OCa} \end{subfigure} \quad \begin{subfigure}{0.78\textwidth} \includegraphics[width=\hsize]{fig_OC-3_16224.pdf} \caption{$\tolsym{DOC} = 10^{-3}$} \label{subfig:OCb} \end{subfigure} \quad \begin{subfigure}{0.78\textwidth} \includegraphics[width=\hsize]{fig_PBM_16224.pdf} \caption{$\tolsym{DOC} = 10^{-5}$} \label{subfig:OCc} \end{subfigure} \end{center} \caption{\label{fig:OC}CANT-16-2-2-4, DOC result with $\tolsym{DOC} = 10^{-2}$, $\tolsym{DOC} = 10^{-3}$, $\tolsym{DOC} = 10^{-5}$. Figure (c) is identical to the IP and PBM results. Only elements with density values of $\rho_i>0.1$ are shown in order to make the differences visible.} \end{figure} Below are some further, detailed observations: \begin{itemize} \item The PBM iterations are very robust in terms of MINRES iterations needed to solve the linear systems. Up to the very last PBM iterations, MINRES only requires 1--3 steps to reach the required accuracy. Even in the last PBM iterations, the number of MINRES steps typically does not exceed 15--20. One reason for this is presumably that the updating scheme for the MINRES tolerance $\tolsym{MR}$ (see Section~\ref{sec:MG}) only rarely needs to update the value. With $\tolsym{MR}$ thus decreasing only very slowly, the linear systems never have to be solved to a very high accuracy. Still, the PBM solution displays the required optimality and feasibility. \item The IP method is much more sensitive to ill-conditioning. While in the first IP iterations MINRES only requires 1--2 steps, this number then quickly increases when nearing the required IP stopping criterion. In the CANT-16-2-2-5 problem with $\urho=10^{-3}$, the number of MINRES steps in the IP Newton iterations grew as follows: 1--1--1--1--1--1--1--1--1--1--1--2--3--3--6--5--7--11--13--23--35--49--55--25--66--466--149--314. \item The number of MINRES steps in every DOC iteration is almost constant. In the CANT-16-2-2-4 problem, this number was between 8 and 11 in the first 49 DOC iterations and 12 for all remaining DOC iterations, even with the stopping tolerance $\tolsym{DOC}=10^{-5}$.
The total number of DOC iterations, however, grows dramatically when higher precision in the stopping criterion is required. \item Because of the way that $\rho$ is computed in the different algorithms, the volume constraint is not satisfied to the same degree of accuracy in each case. The OC method yields the most accurate $\rho$ with respect to the volume constraint, while the PBM solution generally gives $\sum_i \rho_i > V$. The deviation of the PBM solution from $V$ was never more than one per mille in our experiments. \end{itemize} \paragraph{Example BRIDGE-4-2-2-6} We now present some results for the BRIDGE problem. Table~\ref{tab:5} shows the iteration numbers and CPU times for BRIDGE-4-2-2-6 with 524\,288 finite elements. Compared to CANT-16-2-2-x, the stiffness matrix in these problems (and thus the Schur complement for each method) has a higher condition number, due to the different shape of the computational domain. \begin{table}[htbp] \centering \caption{Example BRIDGE-4-2-2-6 solved by different methods. Problem dimensions: $m = 524\,288,\ n=1\,635\,063$.} \renewcommand{\textbf}[1]{\fontseries{b}\selectfont #1\normalfont} \begin{tabular}{l r r r r r S[table-number-alignment = center, detect-weight, mode=text, table-format = 3.6]} \toprule & stop&\multicolumn{2}{c}{iterations} & \multicolumn{2}{c}{CPU time [s]} \\ method &tol & Nwt/OC & MINRES & total & lin~solv & \multicolumn{1}{r}{obj fun}\\ \midrule PBM & $10^{-5}$ & 57 & 330 & 2710 & 1020 & \textbf{42.000}2293\\ PBM & $10^{-6}$ & 62 & 423 & 3000 & 1190 & \textbf{42.00015}61\\ IP & $10^{-5}$ & 49 & 2919 & 6040 & 4320 & \textbf{42.000}2076\\ IP($\urho=10^{-3}$) & $10^{-5}$ & 51 & 2965 & 6210 & 4440 & {\color{gray} 42.0027639}\\ DOC & $10^{-2}$ & 99 & 1454 & 6800 & 5330 & \textbf{42.00}14281\\ DOC & $10^{-3}$ & 278 & 4139 & 19200 & 15100 & \textbf{42.000}2175\\ DOC & $10^{-5}$ & 659 & 9854 & 45900 & 36100 & \textbf{42.00015}23\\ \bottomrule \end{tabular}% \label{tab:5}% \end{table}% \subsection{Large scale problems}\label{subsec:l-scale} In this section, we do not include the CPU times needed to solve the example problems. This is because they were solved on the Linux HPC BlueBEAR with 2000 cores of different types, with up to 498 GB RAM per core. We did not have any control over which cores were used for which job, so that the time statistics could not be used for reliable performance comparison. Furthermore, recall that Matlab only ran on a single core on BlueBEAR, so that the total computation time for any example would most likely not be competitive compared with any parallelized code. We present results for the PBM and IP algorithms only. As we have seen in Section~\ref{subsec:m-scale}, they are both several times faster than the DOC method for the same degree of accuracy. This does not improve with larger problem sizes, which means that the OC method might take several days to solve a problem which is solved in just a few hours by the PBM method. To solve the BRIDGE-$4$-$2$-$2$-$5$ and BRIDGE-$4$-$2$-$2$-$6$ problems, for example, the OC method requires roughly 17 times as much CPU time as the PBM method. This factor is even larger for CANT-$16$-$2$-$2$-$4$ and CANT-$16$-$2$-$2$-$5$. Comparisons for CANT-$4$-$2$-$2$-$5$ and CANT-$4$-$2$-$2$-$6$, which are not included here, gave a factor of over 20. As before, we set $\urho=0$ for the PBM method. For IP, we chose $\urho=10^{-3}$, as ill-conditioning becomes critical in the large-scale problems covered in this section.
Even with this lower bound, IP was not able to solve all of the examples we considered. When it failed, no convergence was apparent once the duality gap had gotten below a certain threshold, which was typically still two or three orders of magnitude too large for the stopping criterion. The same two problems are considered as in the previous section, namely CANT-$m_x$-$m_y$-$m_z$-$\ell$ and BRIDGE-$m_x$-$m_y$-$m_z$-$\ell$. In this section, we fix the width and height of the design domain to $m_y=m_z=2$ and vary the length $m_x=2,4,6,8$. We ran the code with $\ell=5,6,7$ mesh levels. Tables~\ref{tab:CANT_large} and \ref{tab:BRIDGE_large} show the results for \mbox{CANT-$m_x$-$m_y$-$m_z$-$\ell$} and BRIDGE-$m_x$-$m_y$-$m_z$-$\ell$ in terms of iteration numbers. The optimal designs produced by the PBM method can be seen in Figures~\ref{fig:CANT8227} and \ref{fig:BRIDGE8227}. The VTS solution typically has a large ``gray area'', i.e., $\rho_i$ is well within the interval $[\,\urho,\orho\,]$ for the majority of elements. This makes it less straightforward to interpret the solution as a discrete design than it is in the case of the SIMP formulation \cite{bendsoe-sigmund}. We must determine a cut-off value $\rho^{*}$ such that all elements with $\rho_i<\rho^{*}$ are ignored. Moreover, as the design domain is elongated, the density distribution does not simply scale along with it. Rather, the gray area is spread disproportionately more thinly, while most solid elements are clustered along the boundary. Therefore, instead of choosing a constant cut-off value, we found that the most consistent way to plot the results was to consider only the densest elements which add up to a fixed proportion $cV$ of the allowed volume, where we chose $c=0.8$. \begin{table} \caption{Example CANT-$m_x$-$m_y$-$m_z$-$\ell$ solved by IP and PBM. Overall IP/PBM iterations, Newton iterations and MINRES iterations. Non-default parameters: (1) $\gamma=\beta=0.5$ and initial $\tolsym{MR}=10^{-5}\sqrt{n}$.} \label{tab:CANT_large} \centering \begin{tabular}{l @{\hspace{0.8em}} S[table-number-alignment = right, table-figures-integer=8, table-figures-decimal = 0] S[table-number-alignment = right, table-figures-integer=8, table-figures-decimal = 0] *{5}r } \toprule \multicolumn{3}{c}{Problem dimensions} & \multicolumn{2}{c}{IP} & \multicolumn{3}{c}{PBM} \\ \cmidrule(lr){1-3} \cmidrule(lr){4-5} \cmidrule(lr){6-8} $m_x$-$m_y$-$m_z$-$\ell$ & \multicolumn{1}{r}{$m$} & \multicolumn{1}{r}{$n$} & IP/Nwt & MR & PBM & Nwt & MR \\ \midrule 2-2-2-5 & 32768 & 104544 & 31 & 368 & 16 & 50 & 175 \\ 4-2-2-5 & 65536 & 209088 & 28 & 570 & 15 & 57 & 153 \\ 6-2-2-5 & 98304 & 313632 & 26 & 467 & 14 & 45 & 84 \\ 8-2-2-5 & 131072 & 418176 & 27 & 489 & 14 & 45 & 115 \\[0.5em] 2-2-2-6 & 262144 & 811200 & 46 & 1195 & 18 & 60 & 141 \\ 4-2-2-6 & 524288 & 1622400 & 42 & 2465 & 17 & 59 & 118 \\ 6-2-2-6 & 786432 & 2433600 & 39 & 1015 & 17 & 66 & 157 \\ 8-2-2-6 & 1048576 & 3244800 & 39 & 1079 & 16 & 57 & 88 \\[0.5em] 2-2-2-7 & 2097152 & 6390144 & 71 & 2383 & 22 & 66 & 70 \\ 4-2-2-7 & 4194304 & 12780288 & 54 & 3543 & 20 & 57 & 67 \\ 6-2-2-7 & 6291456 & 19170432 & 57 & 2667 & 19 & 60 & 68 \\ 8-2-2-7 & 8388608 & 25560576 & 58 & 2335 & 29\rlap{\hspace{1pt}${}^1$} & 64\rlap{\hspace{1pt}${}^1$} & 100\rlap{\hspace{1pt}${}^1$} \\ \bottomrule \end{tabular} \end{table} \begin{table} \caption{Example BRIDGE-$m_x$-$m_y$-$m_z$-$\ell$ solved by IP and PBM. Overall IP/PBM iterations, Newton iterations and MINRES iterations.
When $\tres_{PBM}$ in the final PBM iteration did not go below $\tolsym{NWT}$, the value at the accepted solution is given. Non-default parameters: (1) $\gamma=\beta=0.5$; (2) initial $\tolsym{MR}=10^{-5}\sqrt{n}$; (3) initial $\tolsym{MR}=10^{-5}\sqrt{n}$ and $\tolsym{NWT}=0.1$. } \label{tab:BRIDGE_large} \centering \begin{tabular} {l @{\hspace{0.8em}} S[table-number-alignment = right, table-figures-integer=8, table-figures-decimal = 0] S[table-number-alignment = right, table-figures-integer=8, table-figures-decimal = 0] *{5}r @{\hspace{1.7em}} S[table-format = 1.2e2] } \toprule \multicolumn{3}{c}{Problem dimensions} & \multicolumn{2}{c}{IP} & \multicolumn{4}{c}{PBM} \\ \cmidrule(lr){1-3} \cmidrule(lr){4-5} \cmidrule(lr){6-9} & \multicolumn{1}{r}{$m$} & \multicolumn{1}{r}{$n$} & IP & MR & PBM & Nwt & MR & \multicolumn{1}{c}{$\tres_{PBM}$} \\ \midrule 2-2-2-5 & 32768 & 107799 & 24 & 387 & 15 & 49 & 220 \\ 4-2-2-5 & 65536 & 212343 & 25 & 590 & 14 & 45 & 155 \\ 6-2-2-5 & 98304 & 316887 & 26 & 778 & 15 & 55 & 309 \\ 8-2-2-5 & 131072 & 421431 & 25 & 1050 & 13 & 47 & 184 \\[0.5em] 2-2-2-6 & 262144 & 823863 & 41 & 2029 & 15 & 56 & 263 \\ 4-2-2-6 & 524288 & 1635063 & 50 & 2965 & 15 & 57 & 317 \\ 6-2-2-6 & 786432 & 2446263 & -- & -- & 15 & 61 & 466 \\ 8-2-2-6 & 1048576 & 3257463 & -- & -- & 15 & 62 & 592 \\[0.5em] 2-2-2-7 & 2097152 & 6440055 & 91 & 4744 & 26\rlap{\hspace{1pt}${}^1$} & 109\rlap{\hspace{1pt}${}^1$} & 1134\rlap{\hspace{1pt}${}^1$} & 1.14e-4 \\ 4-2-2-7 & 4194304 & 12830199 & -- & -- & 26\rlap{\hspace{1pt}${}^1$} & 99\rlap{\hspace{1pt}${}^1$} & 718\rlap{\hspace{1pt}${}^1$} & 2.68e-4 \\ 6-2-2-7 & 6291456 & 19220343 & -- & -- & 25\rlap{\hspace{1pt}${}^{1,3}$} & 98\rlap{\hspace{1pt}${}^{1,3}$} & 743\rlap{\hspace{1pt}${}^{1,3}$} & 2.22e-4 \\ 8-2-2-7 & 8388608 & 25610487 & -- & -- & 25\rlap{\hspace{1pt}${}^{1,2}$} & 97\rlap{\hspace{1pt}${}^{1,2}$} & 707\rlap{\hspace{1pt}${}^{1,2}$} & 1.17e-3 \\ \bottomrule \end{tabular} \end{table} \begin{figure} \caption{Optimal density $\rho$ for CANT-8-2-2-7. The elements with the lowest density values are hidden such that the visible element densities add up to $0.8\cdot V$.} \centering \includegraphics[trim=0 60pt 20pt 40pt, clip, width=0.8\textwidth]{% interiorPD-main-PBMdual-cantilever-82277-20190313-084347-VRC-2.png} \label{fig:CANT8227} \end{figure} \begin{figure} \caption{Optimal density $\rho$ for BRIDGE-8-2-2-7. The elements with the lowest density values are hidden such that the visible element densities add up to $0.8\cdot V$.} \centering \includegraphics[trim=0 60pt 20pt 40pt, clip, width=0.8\textwidth]{% interiorPD-main-PBMdual-4point-82277-20190314-031840-cont-01-VRC-4} \label{fig:BRIDGE8227} \end{figure} To solve some of the examples by the PBM method, we had to deviate from the choice of parameters specified earlier. For some examples with $\ell=7$ refinement levels, we set $\gamma=\beta=0.5$, rather than $\gamma=\beta=0.3$. Otherwise, the penalty parameters are scaled down too fast for these largest examples, so that the system becomes too ill-conditioned before we reach optimality. For the specific example CANT-8-2-2-7, we set the initial $\tolsym{MR}=10^{-5}\sqrt{n}$, because this additional accuracy was required for convergence. Such non-default parameter choices are indicated in Tables~\ref{tab:CANT_large} and \ref{tab:BRIDGE_large}. It needs to be said that even with such adjustments, the PBM algorithm did not solve all problems to the specified accuracy. 
For all BRIDGE-$m_x$-$m_y$-$m_z$-$\ell$ examples with $\ell=7$, it failed either close to or in the last iteration, after $\delta(u,\alpha) / \frac{1}{2}f^\top u$ had dropped below $\tolsym{PBM}=10^{-5}$ and $\tolsym{NWT}$ had been set to $10^{-4}$. The residual term $\tres_{PBM}$, defined in \eqref{eq:resIP}, did not go below $\tolsym{NWT}$ as required. This was because at a certain point, the approximate solutions of the reduced Newton system \eqref{eq:system} were no longer descent directions, presumably due to numerical errors. In these cases, we accepted the nearly optimal solutions at which the algorithm stalled. The iteration numbers we list in the table are those after which no further change in residual values was seen. Note that $\tres_{PBM}$ was well below $10^{-3}$ for all cases except BRIDGE-8-2-2-7, and the scaled duality gap of the accepted solution was always below $10^{-5}$. It is evident from Tables~\ref{tab:CANT_large} and \ref{tab:BRIDGE_large} that the PBM method is both more efficient and more robust than the IP method. In both cases, the use of a multigrid preconditioner for the MINRES solver achieves the desired result in that the number of MINRES iterations grows sublinearly with the size of the system, if at all. The CANT-$m_x$-$m_y$-$m_z$-$\ell$ example even displays a decrease in MINRES iterations with larger system size in some cases. However, this is probably not representative and a possible explanation involves the parameter $\tolsym{MR}$: since its initial value scales with the problem size, it might simply be chosen lower than necessary for the smaller problems. \section{Optimality Condition (OC) method}\label{sec:OC} To get a broader picture, we will compare the PBM and IP algorithms with the established and commonly used Optimality Condition (OC) method. We will therefore briefly introduce the OC algorithm for VTS. For more details, see \cite[p.308]{bendsoe-sigmund} and the references therein. We adapt the algorithm implemented in the popular code {\tt top88.m} \cite{top88}; see Algorithm~\ref{alg:doc}. We call it the damped OC (DOC) method, due to the exponent $q\leq 1$ that shortens the ``full'' OC step. We use the standard value $q=0.5$. \begin{algorithm} \caption{DOC} \label{alg:doc} Let $\rho\in\mathbb{R}^m$ be given such that $\sum_{i=1}^m \rho_i = V$, $ \urho_i \leq \rho_i \leq \orho_i$, $i=1,\ldots,m$. Set $\tau_{\scriptscriptstyle \alpha}=0.1\,\tolsym{DOC}$ and $q\leq 1$. \begin{algorithmic}[1] \Repeat \State{$u=(K(\rho))^{-1} f$} \State{$\oal=10000$, $\ual=0$} \While{$\dfrac{\oal-\ual}{\oal+\ual}>\tau_{\scriptscriptstyle \alpha}$} \State{$\alpha = (\oal+\ual)/2$} \State{ $\rho_i^+ = \min\left\{\max\left\{\rho_i \displaystyle\frac{(u^\top K_i u)^q}{\alpha}, \urho_i\right\},\orho_i\right\}\,, \quad i=1,\ldots,m$} \State{If $\sum_{i=1}^m \rho_i^+>V$ then set $\ual=\alpha$; else if $\sum_{i=1}^m \rho_i^+\leq V$ then set $\oal=\alpha$} \EndWhile \If{$\|\rho^+-\rho\|_{\infty} \leq \tolsym{DOC}$} \State STOP \EndIf \State{$\rho = \rho^+$} \Until convergence \end{algorithmic} \end{algorithm} Following \cite{top88}, we use the stopping criterion $$ \|\rho^+-\rho\|_{\infty} \leq \tolsym{DOC}\,, $$ where $\rho$ and $\rho^+$ are the two most recent iterates.
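As an illustration of the density update and the bisection on $\alpha$, consider the following sketch of one DOC density update; it mirrors Algorithm~\ref{alg:doc} in Python/numpy with ad hoc names (including the default tolerance argument) and is not the {\tt top88.m} code itself.
\begin{verbatim}
import numpy as np

def doc_update(rho, u, Ks, V, rho_lo, rho_hi, q=0.5, tol_alpha=1e-6):
    """One DOC density update with bisection on alpha (cf. Algorithm DOC).
    u is the displacement vector computed for the current rho."""
    energies = np.array([u @ (K @ u) for K in Ks])     # u^T K_i u
    a_lo, a_hi = 0.0, 1e4
    while (a_hi - a_lo) / (a_hi + a_lo) > tol_alpha:
        alpha = 0.5 * (a_lo + a_hi)
        rho_new = np.clip(rho * energies**q / alpha, rho_lo, rho_hi)
        if rho_new.sum() > V:
            a_lo = alpha        # too much material: increase alpha
        else:
            a_hi = alpha        # volume satisfied: decrease alpha
    return rho_new
\end{verbatim}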
While {\tt top88.m} uses $\tolsym{DOC} = 10^{-2}$, we found that this value is too generous in many 3D examples, resulting in an image that is significantly different from the image obtained with $\tolsym{DOC} \leq 10^{-3}$; see Figure~\ref{fig:OC} in Section~\ref{sec:Num}, where we address the choice of $\tolsym{DOC}$ in a bit more detail. Another parameter we changed, as compared to \cite{top88}, was the value of the stopping criterion for the bisection method, $\tau_{\scriptscriptstyle \alpha}$. In Section \ref{sec:Num}, we use $\tau_{\scriptscriptstyle \alpha}=0.1\,\tolsym{DOC}$, which leads to more stable behaviour of the DOC method and only a marginal increase in total CPU time. The reader may ask about the relation of the DOC stopping criterion (using the difference of the variables in two subsequent iterations) to the more rigorous criterion based on the duality gap, used in the PBM and IP algorithms. Our experiments revealed a somewhat surprising phenomenon: in most of the problems we solved, the behaviour of the two stopping measures was almost identical. This experience justifies the use of the DOC stopping criterion and, in particular, the relative fairness of our comparisons of DOC with PBM and IP. \section{Dual VTS problem}\label{sec:PrimalDualVTS} Consider the variable thickness sheet problem \eqref{eq:to_intro}. Following \cite{Ben-Tal1993,kocvara2017truss} in the context of equivalent formulations for truss topology optimization, we can formulate a dual problem to \eqref{eq:to_intro}: \begin{equation} \label{eq:dual} \begin{aligned} &\min_{u\in\mathbb{R}^n\!,\,\alpha\in\mathbb{R},\,\unu,\,\onu\in\mathbb{R}^m} \alpha V-f^\top u -\urho^\top \unu+\orho^\top \,{\onu}\\ &\mbox{subject to}\\ &\qquad \frac{1}{2}u^\top K_iu\leq \alpha-\unu_i+{\onu}_i, \quad i=1,\ldots,m \\ &\qquad {\unu}_i\geq 0,\quad i=1,\ldots,m\\ &\qquad {\onu}_i\geq 0,\quad i=1,\ldots,m\,. \end{aligned} \end{equation} \begin{theorem}\label{th:equiv} Problems \eqref{eq:to_intro} and \eqref{eq:dual} are equivalent in the following sense: \begin{itemize} \item[(i)] If one problem has a solution, then the other problem also has a solution and $$ \min\eqref{eq:to_intro} = \min\eqref{eq:dual}\,. $$ \item[(ii)] Let $(u^*,\alpha^*,\unu^*,\onu^*)$ be a solution to \eqref{eq:dual}. Further, let $\tau^*$ be the vector of Lagrangian multipliers for the inequality constraints associated with this solution. Then $(u^*,\tau^*)$ is a solution of \eqref{eq:to_intro}. Moreover, $$ \unu^*_i\onu^*_i = 0,\quad i=1,\ldots,m. $$ \item[(iii)] Let $(u^*,\rho^*)$ be a solution of \eqref{eq:to_intro}. Further, let $\urr^*$ and $\orr^*$ be the Lagrangian multipliers associated with the lower and upper bounds on $\rho$, respectively, and let $\lambda^*$ be the multiplier for the volume constraint. Then $(u^*,\lambda^*,\urr^*,\orr^*)$ is a solution of \eqref{eq:dual}. \end{itemize} \end{theorem} \begin{proof} We will first write \eqref{eq:to_intro} equivalently as \begin{equation} \min_{\stackrel{\urho_i \leq \rho_i \leq \orho_i}{\sum_{i=1}^m \rho_i = V}} \max_{u\in\mathbb{R}^n}\ f^\top u - \frac{1}{2} u^\top K(\rho)u \,.\label{compliance_b_var} \end{equation} Indeed, as $K(\rho)$ is by assumption positive semidefinite, the necessary and sufficient optimality condition for the inner maximization problem is $K(\rho)u=f$ and, using this, the optimal value of the maximization problem is $\frac{1}{2}f^\top u$.
Problem \eqref{compliance_b_var} is convex (actually linear) and bounded in $\rho$ and concave in $u$, so we can switch ``max'' and ``min'' (see, e.g., \cite{ekeland-temam}) to get an equivalent problem: $$ \max_{u\in\mathbb{R}^n}\inf_{\stackrel{\urho_i \leq \rho_i \leq \orho_i}{\sum_{i=1}^m \rho_i = V}} f^\top u - \frac{1}{2} u^\top K(\rho)u \,. $$ Due to our assumption of strict feasibility, there exists a Slater point for the feasible set of the inner (convex) optimization problem, so we may replace it by its Lagrangian dual. The Lagrangian multipliers for the inequalities will be denoted by $\urr\in\mathbb{R}^m_+$ and $\orr\in\mathbb{R}^m_+$, that for the volume equality constraint by $\lambda\in\mathbb{R}$: \begin{equation} \label{eq:qcqp_th1} \max_{u\in\mathbb{R}^n}\max_{\stackrel{\scriptstyle \lambda\in\mathbb{R}}{\urr\in\mathbb{R}^m_+, \orr\in\mathbb{R}^m_+}}\inf_{\rho\in\mathbb{R}^m_+}\ f^\top u - \frac{1}{2} u^\top K(\rho)u + \lambda(\sum_{i=1}^m \rho_i - V) -\urr^\top (\rho-\urho) +\orr^\top (\rho-\orho) \,. \end{equation} We can include the non-negativity constraint on $\rho$ in the inner-most optimization problem because we know that the solution to \eqref{eq:qcqp_th1} satisfies $\rho\ge\urho\ge 0$. Now consider the dual problem \eqref{eq:dual}. It can equivalently be formulated as the following min-max problem, using a partial Lagrangian function with multiplier $\tau\in\mathbb{R}^m$: $$ \min_{\stackrel{\scriptstyle u\in\mathbb{R}^n}{\stackrel{\scriptstyle\alpha\in\mathbb{R}} {\unu\in\mathbb{R}^m_+,\onu\in\mathbb{R}^m_+}}} \max_{\tau\in\mathbb{R}^m_+}\ \alpha V - f^\top u -\unu^\top \urho + \onu^\top \orho + \sum_{i=1}^m \tau_i(\frac{1}{2} u^\top K_iu - \alpha+\unu_i - \onu_i) $$ which can be rearranged further to give \begin{equation} \label{eq:qcqp_th2} \min_{\stackrel{\scriptstyle u\in\mathbb{R}^n}{\stackrel{\scriptstyle\alpha\in\mathbb{R}} {\unu\in\mathbb{R}^m_+,\onu\in\mathbb{R}^m_+}}} \max_{\tau\in\mathbb{R}^m_+}\ \frac{1}{2} u^\top K(\tau)u - f^\top u + \alpha(V-\sum_{i=1}^m \tau_i) +\unu^\top (\tau-\urho) -\onu^\top (\tau-\orho) \,. \end{equation} Identifying $\tau$, $\alpha$, $\unu$, and $\onu$ with $\rho$, $\lambda$, $\urr$, and $\orr$, respectively, and changing the sign of the objective function (and thus changing ``max'' to ``min'' and ``min'' to ``max''), we can see that \eqref{eq:qcqp_th1} and \eqref{eq:qcqp_th2} are equivalent. For later reference, note that the multiplier $\tau$ of the dual problem corresponds to the primal variable $\rho$, the density. The second part of (ii) is obvious from the fact that $\unu$ and $\onu$ are multipliers for the lower and upper bounds, so only one of them can be positive (only one bound can be active) for each component. \end{proof} Notice that \eqref{eq:dual} is a convex optimization problem, as $K_i$ are positive semidefinite. We finish this section with another formulation of the dual VTS problem that allows us to easily compute the duality gap (this formulation was first derived in \cite{Ben-Tal1993}). \begin{theorem} Problem \eqref{eq:dual} is equivalent to an unconstrained nonsmooth problem \begin{align} \label{eq:qcqp_c} &\max_{u\in\mathbb{R}^n,\alpha\in\mathbb{R}} -\alpha V+f^\top u + \sum_{i=1}^m \min\{(\alpha-\frac{1}{2} u^\top K_iu)\urho_i,(\alpha-\frac{1}{2} u^\top K_iu)\orho_i\} \end{align} in the following sense: \begin{itemize} \item[(i)] $\min\eqref{eq:dual}= - \max\eqref{eq:qcqp_c}$; \item[(ii)] Let $(u^*,\alpha^*,\unu^*,\onu^*)$ be a solution of \eqref{eq:dual}.
Then $(u^*,\alpha^*)$ is a solution of~\eqref{eq:qcqp_c}. Conversely, every solution $(u^*,\alpha^*)$ of \eqref{eq:qcqp_c} is a part of~a solution of \eqref{eq:dual}. \end{itemize} \end{theorem} \begin{proof} We will show that \cref{eq:dual} and \cref{eq:qcqp_c} are equivalent reformulations of each other. Introducing an auxiliary variable $s\in\mathbb{R}^m$, problem \eqref{eq:qcqp_c} can be directly re-written as \begin{align*} &\max_{\scriptstyle u\in\mathbb{R}^n,\alpha\in\mathbb{R},s\in\mathbb{R}^m} -\alpha V+f^\top u + \sum_{i=1}^m s_i \\ &\mbox{\rm subject to}\\ &\qquad (\alpha-\frac{1}{2}u^\top K_iu)\orho_i\geq s_i, \quad i=1,\ldots,m \\ &\qquad (\alpha-\frac{1}{2}u^\top K_iu)\urho_i\geq s_i, \quad i=1,\ldots,m \,. \end{align*} The constraints in the above problem can be written as $$ (\alpha-\frac{1}{2}u^\top K_iu)\geq \max\{\frac{s_i}{\urho_i},\frac{s_i}{\orho_i}\}\,, \quad i=1,\ldots,m \,. $$ Noting that $\orho>\urho\geq 0$, we define \begin{alignat*}{2} & \unu_i = \frac{s_i}{\urho_i}\,,\ \onu_i=0\,, && \quad\text{if}\quad \frac{s_i}{\urho_i}>\frac{s_i}{\orho_i} > 0\\ & \unu_i = 0\,,\ \onu_i=-\frac{s_i}{\orho_i}\,, && \quad\text{if}\quad \frac{s_i}{\urho_i}\leq\frac{s_i}{\orho_i} \leq 0\,. \end{alignat*} Then the above set of constraints can also be written as $$ (\alpha-\frac{1}{2}u^\top K_iu)\geq \unu_i - \onu_i\,, \quad i=1,\ldots,m \,. $$ Obviously, these $\unu_i,\onu_i$ also satisfy the non-negativity constraints. Lastly, we can reformulate the objective function to match \eqref{eq:dual}, since $$ \sum_{i=1}^m {\urho_i}\unu_i - \sum_{i=1}^m {\orho_i}\onu_i = \sum_{i:\frac{s_i}{\urho_i}>\frac{s_i}{\orho_i}} {\urho_i}\frac{s_i}{\urho_i} + \sum_{i: \frac{s_i}{\urho_i}\leq\frac{s_i}{\orho_i}} {\orho_i}\frac{s_i}{\orho_i} = \sum_{i=1}^m s_i\,. $$ Switching the sign of the objective function, claims (i) and (ii) follow. \end{proof} Assume that $(u,\alpha)$ is a feasible point in the dual problem \eqref{eq:dual} such that there exists $\rho$ satisfying $K(\rho)u= f$ and $(\rho,u)$ is feasible in the primal problem \eqref{eq:to_intro}. We then have the following formula for the duality gap: \begin{equation} \label{eq:gap} \begin{aligned} \delta(u,\alpha) :=& \; \min{\eqref{eq:to_intro}} - \max{ \eqref{eq:qcqp_c} } \\ =& -\frac{1}{2}f^\top u + \alpha V - \sum_{i=1}^m \min\left\{\urho_i(\alpha-\frac{1}{2}u^\top K_iu),\orho_i(\alpha-\frac{1}{2}u^\top K_iu)\right\}\,. \end{aligned} \end{equation}
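For illustration, the duality gap formula \eqref{eq:gap} translates directly into code. The following Python sketch evaluates $\delta(u,\alpha)$ for a given dual point; the load vector \texttt{f}, the element stiffness matrices \texttt{K\_elem} and the bounds \texttt{rho\_lo}, \texttt{rho\_hi} are assumed inputs, and the PBM and IP implementations referenced above may of course organize this computation differently.
\begin{lstlisting}[language=Python]
import numpy as np

def duality_gap(u, alpha, f, K_elem, rho_lo, rho_hi, V):
    """Evaluate delta(u, alpha) from the duality gap formula:
    -1/2 f^T u + alpha*V - sum_i min{lo_i*g_i, hi_i*g_i},
    where g_i = alpha - 1/2 u^T K_i u."""
    g = alpha - 0.5 * np.array([u @ (Ki @ u) for Ki in K_elem])
    return -0.5 * (f @ u) + alpha * V - np.minimum(rho_lo * g, rho_hi * g).sum()
\end{lstlisting}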
{ "timestamp": "2019-04-16T02:10:57", "yymm": "1904", "arxiv_id": "1904.06556", "language": "en", "url": "https://arxiv.org/abs/1904.06556" }
\section{Introduction} Over the last two decades, probabilistic topic modeling (\emph{topic modeling} for short) has become an active sub-field of information retrieval and machine learning. Topic modeling may be considered a refinement of document clustering and comes as an unsupervised machine learning approach in its basic versions: as opposed to pure document clustering, \emph{topic modeling allows for many topics to occur in a single document} but still mandates common topics across the documents of a training collection. Hereby, each topic is typically represented via a multinomial distribution over the collection's vocabulary. Related ideas and solutions were formed in the two seminal publications on \emph{probabilistic Latent Semantic Indexing} (pLSI) (\cite{hofmann:1999:uai}) and \emph{Latent Dirichlet Allocation} (LDA) (\cite{blei:2003:lda:944919.944937}). \cite{Pritchard-Stephens-Donnelly:2000:Genetics} proposed a model similar to LDA independently in the field of population genetics. Besides classical text document analysis and genetics, topic modeling has turned out to be of use in bio-informatics \citep{topicmodelingbio:2016}, digital libraries \citep{griffiths-steyvers:2004:pnas}, recommender systems \citep{hu-hall-attenberg:2014:kdd}, computing in the service of political and social studies (``digital humanities'') \citep{journaldh:2012} and other application areas (e.g.~see \cite{boydgraber-hu-mimno:2017:ftir}). Regarding pure document clustering, the two major machine learning directions are \emph{Expectation Maximization} (EM) including $k$-Means on the one hand and hierarchical clustering including agglomerative clustering on the other hand. In comparison, EM-based techniques have also been a central means for topic inference but \emph{the opportunities of hierarchical clustering for topic modeling have been overlooked to date.} In this paper, we aim to partially close this gap by developing and evaluating \emph{Topic Grouper} as \emph{a topic modeling approach based on agglomerative clustering}. Important benefits of agglomerative clustering for topic modeling lie in its simplicity, absence of hyper parameters, deep hierarchical structures of topics as well as the ability to find even conceptually narrow topics. A major challenge is to determine a well-founded cluster distance with reasonable predictive qualities and computational performance. The remainder of this article is structured as follows: Section~\ref{sec:related-work} describes relevant related work. It also outlines basic concepts behind topic models and summarizes the hyper parameter problem for LDA. Section~\ref{sec:theory} introduces the generative model behind Topic Grouper and derives a related cluster distance. Moreover, a corresponding algorithm for model computation is presented and its complexity is assessed. Section~\ref{sec:evaluation} includes a range of experiments comparing the performance of Topic Grouper with two LDA variants. A synthetic dataset allows for applying error rate as a quality measure. Regarding real-world datasets covering retailing and text, we resort to perplexity. In addition, Section~\ref{featurered} examines Topic Grouper as a feature reduction method for text classification and compares it to LDA as well as to two common text-oriented feature selection techniques. Section~\ref{viz} discusses approaches to inspect learned models and reports on related examples for a larger text dataset. Section~\ref{sec:discussion} summarizes and discusses our findings. 
Section~\ref{sec:conclusion} gives pointers to possible future work. \section{Basic Concepts and Related Work} \label{sec:related-work} \subsection{Agglomerative Clustering} Clustering items of data, such as sets of numeric vectors, by similarity is an old idea. \emph{Hierarchical agglomerative clustering} (HAC) or simply agglomerative clustering is the process of clustering the clusters in turn iteratively, based on a similarity measure between clusters from a previous iteration. It was first described in the 1960s by authors including \cite{ward:1963:jamstat}, \cite{Lance-Williams:1966:Nature,Lance-Williams:1967:ComputerJ}, and others. A \emph{cluster distance} is usually the term for the inverse of a similarity measure underlying a clustering procedure. Standard cluster distances derived from the so-called Lance-Williams formula include single linkage, complete linkage and group average linkage, but many others have been proposed (see, e.g.,~\cite{Murtagh83,xu-survey-clustering-algorithms-2005}). Cluster distances, such as the one developed here, may not necessarily meet standard mathematical distance axioms, as agglomerative clustering can do without them (\cite{ward:1963:jamstat}). Moreover, our cluster distance is \emph{model-based}, as it is governed by a simple generative model. Model-based agglomerative clustering has rarely been investigated: \cite{Kamvar:2002:IEC:645531.656166} give a model-based interpretation of some standard cluster distances and partly extend them under the same framework. \cite{Vaithyanathan:2000:MHC:2073946.2074016} develop a recursive probabilistic model for a clustering tree in order to explain the data items merged at each tree node. The model is applied to the case of pure document clustering. For efficiency reasons the authors resort to a mix of agglomerative and flat clustering. A common critique of agglomerative clustering is its relatively high time complexity, typically amounting to $O(k^2)$ or more in the number of data items $k$ (\cite{xu-survey-clustering-algorithms-2005}). Also, space complexity is often in $O(k^2)$ depending on the chosen cluster distance. In the case of our contribution, and additionally in the case of text, \emph{$k$ corresponds to the vocabulary size}, \emph{which can be limited} even for large text collections, e.g.~by simple filtering criteria such as high document frequency. This offers the potential for a reasonable computational overhead in the context of topic modeling. A major asset of agglomerative clustering is the \emph{tree structure of its clusters}, often assumed to reflect containment hierarchies. Also, it is widely held that agglomerative clustering offers better and more computationally stable clusters than competing procedures such as $k$-Means (\cite{Jain88}, p.~140). For further exposition, we refer the reader to recent text books on the topic (e.g.~\cite{Xu-Wunsch:2008,Everitt-etal:2011}) and various survey papers (e.g.~\cite{Murtagh83}, \cite{Jain:1999:DCR:331499.331504} and \cite{xu-survey-clustering-algorithms-2005}). \subsection{Probabilistic Topic Modeling} \label{sec:concepts} Topic modeling evolved from \emph{Latent Semantic Analysis} (LSA) -- an algebraic dimensionality reduction technique using \emph{Singular Value Decomposition} to retain the $n$ largest singular values, which show the dimensions with the greatest variance between words and documents (\cite{deerwester-etal:1990:jasist}).
\emph{Latent Semantic Indexing} is the application of LSA to document indexing and retrieval (\cite{hofmann:1999:sigir}). A drawback of LSA is the lack of a probabilistic interpretation. This was first addressed by pLSI in \cite{hofmann:1999:uai}. In their influential paper, \cite{blei:2003:lda:944919.944937} describe LDA and extend pLSI by two Dirichlet priors, thus completing the generative approach and aiding in the smoothing of the resulting models. In the following, we briefly reiterate such non-hierarchical or \emph{flat} topic models in order to provide the foundation for our own method. \newpage \subsubsection{Non-Hierarchical Topic Models} Let \begin{itemize} \item $D$ be the set of training documents with size $|D|$, \item $V$ be the vocabulary of $D$ with size $|V|$, \item $f_d(w)$ be the frequency of a word $w \in V$ with regard to $d \in D$. \end{itemize} Given a set of topic references $T$ with $|T| = n$, the goal of non-hierarchical or flat topic modeling is to estimate $n$ topic-word distributions $p(w|t)_{w \in V}$ (one for each $t \in T$) and $|D|$ document-topic distributions $p(t|d)_{t \in T}$ (one for each $d \in D$). Together, these distributions are meant to maximize $p(D) = \prod_{d \in D} p(d)$, where $p(d)$ is the probability of all word occurrences in $d$ regardless of their order. Yet how this is done in detail depends on the topic modeling approach: Under pLSI (\cite{hofmann:1999:uai}) we have \[p(d) = c_d \cdot \prod_{w \in V} p(w|d)^{f_d(w)} \quad\textrm{and}\quad p(w|d) = \sum_{t \in T} p(w|t) \cdot p(t|d).\footnote{The factor $c_d = (\sum_{w \in V} f_d(w))! / \prod_{w \in V, f_d(w) > 0} f_d(w)!$ accounts for the underlying ``bag of words model'' where word order is ignored. It is usually omitted in publications because if two approaches are compared, the expression turns out to be an identical factor for both approaches (\cite{buntine2006}). We therefore also set $c_d := 1$.}\] The $n$ topic-word distributions form a corresponding topic model $\phi = \{ \phi_t \}$. Each $\phi_t = p(w|t)_{w \in V}$ represents the essence of a topic, where $t$ itself is just for reference. As a more sophisticated Bayesian approach, LDA puts all potential topic-word distributions under a Dirichlet prior $\beta$ in order to determine $p(D)$ (\cite{blei:2003:lda:944919.944937}). In this case, an approximation of \begin{equation} \label{eq:lda1} \Phi = argmax_\phi ((\prod_{t \in T} p(\phi_t)) \cdot \prod_{d \in D} p(d| \phi, \alpha \textbf{m}))\ \textrm{with}\ \phi_t \sim Dirichlet(\beta) \end{equation} may be considered a topic model (\cite{blei:2003:lda:944919.944937}). Hereby, $\alpha \textbf{m}$ is an additional Dirichlet prior to determine \begin{equation} \label{eq:lda2} p(d|\phi, \alpha \textbf{m}) = \int p(\theta_d) \cdot \prod_{w \in V} (\sum_{t \in T} \phi_t(w) \cdot \theta_d(t))^{f_d(w)} d\theta_d\ \textrm{with}\ \theta_d \sim Dirichlet(\alpha \textbf{m}). \end{equation} As an alternative to the $argmax$ operator, $\phi$ may be integrated out, leading to a corresponding point estimate for $\Phi$ (\cite{griffiths-steyvers:2004:pnas}). Considering training results, $\Phi$ plays the same role as a distribution $p(V|t)$ under pLSI. With this in mind, \emph{we often use the letter $\Phi$ for topic models regardless of the underlying modeling approach.} A similar concession holds for document-topic distributions $p(T|d)$.
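As a minimal illustration of the quantities above, the following Python sketch evaluates the bag-of-words log likelihood $\log p(d)$ of a single document under a flat topic model (with $c_d := 1$ as discussed in the footnote). The array names are our own and not taken from any particular implementation; the same routine applies to pLSI and to a point-estimated LDA model $\Phi$ alike, since both reduce to the mixture $p(w|d) = \sum_{t} \Phi_t(w) \cdot \theta_d(t)$ at evaluation time.
\begin{lstlisting}[language=Python]
import numpy as np

def log_p_document(f_d, phi, theta):
    """Bag-of-words log likelihood of one document:
    log p(d) = sum_w f_d(w) * log sum_t phi_t(w) * theta_d(t),
    with the multinomial factor c_d set to 1 as in the text.
    f_d:   word frequency vector of length |V|
    phi:   topic-word matrix of shape (n, |V|); rows sum to 1
    theta: document-topic vector of length n; sums to 1"""
    p_w = theta @ phi            # mixture p(w|d) for every word w
    mask = f_d > 0               # only observed words contribute
    return float(np.sum(f_d[mask] * np.log(p_w[mask])))
\end{lstlisting}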
There exist several methods and various derived algorithms readily available to approximate $\Phi$ under LDA including variational Bayes, MAP estimation and Gibbs sampling (e.g.~see \cite{Asuncion:2009:SIT:1795114.1795118}). \subsubsection{Hyper Parameter Optimization for LDA} \label{ldahparam} LDA is a very successful method, but suffers from the need to set several hyper parameters. This section gives a brief overview of the issue as relevant to the evaluations in Section \ref{sec:evaluation}. Besides the number of topics $n$, standard LDA has two hyper parameters that must be adjusted for model computation (\cite{blei:2003:lda:944919.944937,conf/nips/wallachmm09,Asuncion:2009:SIT:1795114.1795118}): \begin{itemize} \item The vector $\alpha \textbf{m} \in \Re^n$ with $\sum_{i = 1}^n \textbf{m}_i = 1$, $\textbf{m}_i > 0$ for all $i$ and $\alpha > 0$, where $\alpha$ is called the \emph{concentration parameter}: $\alpha \textbf{m}$ parametrizes which document-topic distributions $\theta_d$ from Equation \ref{eq:lda2} are more or less probable (regardless of $d$). In practice, $\alpha \textbf{m}$ is often set with $\textbf{m}_i = 1/n$. This case is called ``symmetric'' since the concentration parameter $\alpha$ remains as the only degree of freedom for $\alpha \textbf{m}$. \item The vector $\beta\in \Re^{|V|}, \beta_{i} > 0$ for all $i$ (sometimes also named $\eta$): $\beta$ parametrizes which topic-word distributions $\phi_t$ from Equation \ref{eq:lda1} are more or less probable (regardless of $t$). $\beta$ is usually kept symmetric. \end{itemize} When applying LDA, there are different approaches to determine reasonable values for $n$, $\alpha$ and $\beta$: $n$ is often varied via a parameter search (e.g. in \cite{blei:2003:lda:944919.944937, griffiths-steyvers:2004:pnas, Asuncion:2009:SIT:1795114.1795118}) with a range typically between 10 and 1000 and a step size of 10. The optimization criterion is high log probability or, equivalently, low perplexity for held-out test documents $D_{test}$. Regarding the optimization of $\alpha \textbf{m}$, the following options are of practical relevance: \begin{itemize} \item If one opts for a symmetric $\alpha$, a hyper parameter search may be performed (\cite{Asuncion:2009:SIT:1795114.1795118}). The optimization goal is the same as for $n$, but one does not use test documents as $\alpha$ is considered a more integral part of the training process. \item A simpler approach for a symmetric $\alpha$, well established in practice, is to apply a \emph{heuristic} from \cite{griffiths-steyvers:2004:pnas} by setting $\alpha = 50/n$. \item Another technique is to (re-)estimate $\alpha \textbf{m}$ as part of an EM procedure. E.g. in \cite{Asuncion:2009:SIT:1795114.1795118}, the (re-)estimation of $\alpha \textbf{m}$ is based on an initially computed topic model $\Phi_1$. The updated $\alpha \textbf{m}$ can in turn be used to compute an updated model $\Phi_2$ (while using $\Phi_1$ as a starting point to compute $\Phi_2$) and so forth. After several iterations of such alternating steps, the models $\Phi_i$ as well as $\alpha \textbf{m}$ converge. \cite{minka:2000:tr} provides a theoretical basis for the estimation of Dirichlet parameters via sample distribution data. In the case of $\alpha \textbf{m}$, these are (samples of) estimated distributions $p(T|d)$ as computed along with an intermediate model $\Phi_i$.
To do so, \cite{Asuncion:2009:SIT:1795114.1795118} leverage Equation 55 from \cite{minka:2000:tr} in the EM procedure and coined the name ``Minka's update'' for this particular E-step. Minka's update can be implemented under a symmetric as well as an asymmetric $\alpha$. \end{itemize} Concerning $\beta$, similar alternatives exist as for $\alpha \textbf{m}$. A related heuristic for a symmetric $\beta$ from \cite{griffiths-steyvers:2004:pnas} is $\beta = 0.1$. \cite{conf/nips/wallachmm09} report that an \emph{asymmetric $\beta$ optimization offers worse predictive performance than its symmetric counterpart}, but they also stress \emph{the importance of the asymmetric $\alpha$ case} for topic model quality. Later, when comparing LDA against Topic Grouper in Section~\ref{sec:evaluation}, we refer to the heuristics for $\alpha$ and $\beta$ from \cite{griffiths-steyvers:2004:pnas} as ``\emph{LDA with Heuristics}''. We also include an optimization for an asymmetric $\alpha\textbf{m}$ combined with a symmetric $\beta$ optimization using Minka's update and call it ``\emph{LDA Optimized}''. We include both approaches in our evaluation as extreme variants of LDA hyper parametrization: the former being straightforward and efficient; the latter offering higher predictive performance but also incurring substantial computational overhead due to intertwined approximation procedures. We use Gibbs sampling according to \cite{griffiths-steyvers:2004:pnas} in order to compute intermediate topic models $\Phi_i$ as described above and the final model $\Phi$, respectively.\footnote{Although intricate, the details of hyper parameter settings matter: some publications compare approaches to LDA but, for example, leave it unclear whether $\alpha$ is kept symmetric or if it is optimized. E.g., \cite{tan-ou:2010:iscslp} report that ``basic LDA fails'' to successfully learn a solution for the kind of data we use in Section \ref{syn_data}. In comparison, we found that LDA succeeds in this case if its hyper parameters are set accordingly.} \subsubsection{Hierarchical Topic Models} Traditional topic models create flat topics; however, it may be more appropriate to have a hierarchy comprising multiple levels of super-topics and increasingly specialized sub-topics. To address this, topic model extensions based on trees and directed acyclic graphs have been proposed. One of the early attempts towards hierarchical topic models is \cite{hofmann:1999:ijcai}'s Cluster Abstraction Model (CAM), using an EM procedure with annealing: Leaf nodes of a hierarchy are generated first via probabilistic soft clustering of documents. Inner nodes form latent sources of each word occurrence in a document such that a respective inner node is the ancestor of a leaf cluster in which the document is placed. The latent sources are subject to probabilistic modeling based on the hierarchy's leaves. Experiments indicate that top probability words in inner nodes form topical abstractions of the document clusters they subsume. \cite{segal-koller-ormoneit:2002:nips}'s \emph{Probabilistic Abstraction Hierarchies} (PAH) is another model based on the EM algorithm: it jointly optimizes cluster assignment, class-specific probabilistic models (CPMs), which act as taxonomy nodes, and the taxonomy structure. The latter two are globally optimized.
The authors state that ``data is generated only at the leaves of the tree, so that a model basically defines a mixture distribution whose components are the CPMs at the leaves of the tree.'' They offer a brief evaluation including a predictive performance comparison of PAH with hierarchical clustering on gene expression data. \cite{blei-etal:2003:nips} discuss an extension of the ``Chinese restaurant process'' (CRP) from \cite{me22}: Their so-called ``\emph{nested Chinese restaurant process}'' (nCRP) allows for inferring hierarchical mixture models while permitting uncertainty about branching factors. Based on the nCRP, the authors propose \emph{Hierarchical LDA} (hLDA) to estimate topic trees of a given depth $L$. Documents are thought to be generated by first choosing a path of length $L$ along a tree and then mixing the document's topics via the chosen path, where each path node represents a topic to be inferred. The corresponding document-topic distribution is subject to a Dirichlet distribution with prior $\alpha$. Under hLDA, higher level topics tend to be common across many documents, but do not necessarily form semantic generalizations of lower level topics. I.e., the model tends to push stop words and function words towards the root of the tree and rather domain-specific words towards the leaves. Besides $L$ and $\alpha$, hLDA requires a prior $\gamma$ affecting the branching factor of estimated trees and a prior $\eta$, which is equivalent to $\beta$ under LDA. The \emph{Hierarchical Dirichlet Process} (HDP) by \cite{teh-etal:2006:jamstatassoc} is a framework for two or more layered Dirichlet processes (DPs), where a first-level DP produces the parameters for $J$ second-level DPs which in turn create mixture components to explain $J$ groups of data. A merit of the HDP is that the number of mixture components on the second level need not be set in advance while still enabling a degree of sharing of mixture components between the groups. E.g. with regard to topic modeling, the authors apply the HDP in order to infer the number of \emph{flat} topics on a small-sized document collection along with a respective topic model. The HDP still mandates hyper parameters similar to $\alpha$ and $\beta$ under LDA. \cite{wang-paisley-blei:2011:jmlr} present a faster inference algorithm for HDP, which scales up to larger dataset sizes. The \emph{Pachinko Allocation Model} (PAM) from \cite{wei-mccallum:2006:icml} is a hierarchical topic model based on multiple Dirichlet processes. The PAM requires a directed acyclic graph (DAG) as a prior, where leaf nodes correspond to words from the vocabulary, parents of leaf nodes correspond to \emph{flat}, word-based topics and other nodes represent mixture components over their children's mixture components. A topic for a word occurrence of a document is sampled by considering all paths from the root to the leaves' parents. Moreover, the mixture components of all inner nodes are subject to Dirichlet distributions. Due to this structure, higher level nodes in the graph form abstractions of topic mixtures across documents and therefore capture topic correlations. As respective super-topics represent mixes over topics, the authors do not offer a labeling scheme for them. Besides the basic graph structure, the PAM has similar hyper parameters as LDA including $\alpha$, $\beta$ and the number of word-based topics $n$. Furthermore, $\alpha$ forms a set of vectors, one for each inner node, which are estimated as part of PAM's inference process.
The \emph{recursive Chinese Restaurant Process} (rCRP) from \cite{Kim:2012:MTH:2396761.2396861} is another extension of the CRP to infer hierarchical topic structures. In contrast to hLDA, the sampling of a document-topic distribution is generalized in a way that permits a document's topics to be drawn from the entire (hierarchical) topic tree, not just from a single path. Regarding document-topic assignments, the rCRP makes the drawing of topics deeper in the tree less likely and estimates the branching factor of a topic tree node similarly to a regular CRP. The topic-word distributions of a tree node are controlled via a Dirichlet with a symmetric prior $\beta^k$, where $\beta < 1$ and $k$ is the depth of the node. As the prior gets smaller with increasing depth, the resulting distributions get more peaked, which facilitates the production of more specific topics towards the leaves. A CRP based on a scalar prior $\alpha$ controls how words from a document are assigned to topics and another scalar prior $\gamma$ controls the inferred depth and branching factor of the topic tree under the rCRP. An experimental analysis and examples of inferred topics indicate that the approach alleviates well-known drawbacks of hLDA including the one mentioned above. The \emph{Nested Hierarchical Dirichlet Process} (nHDP) from \cite{6802355} is perhaps the most sophisticated approach to produce tree-structured topics on the basis of DPs: Based on \cite{blei-etal:2003:nips} it uses the nCRP to produce a global topic tree. Every document obtains its specific topic tree, which is derived from the global tree via an HDP. Hence, the HDP ensures a degree of sharing of topics between documents and allocates document-level topics based on DPs associated with the nodes of the global topic tree. To sample a word's topic from the document-level topic tree, the nHDP descends through that tree and may stop at any node. Stopping or progressing is a random event based on node-related probabilities drawn from a beta distribution with hyper parameters $\gamma_1$ and $\gamma_2$. The approach also mandates a hyper parameter $\alpha$ for its basic nCRP and $\beta$ for document-level trees. The authors provide efficient inference procedures and offer impressive results on small as well as very large text datasets, where the vocabulary on the large datasets is reduced to about 8,000 words. An apparent commonality of the presented approaches is the need for hyper parameters---usually several scalars. This also holds for the \emph{Hierarchical Latent Tree Analysis} (HLTA) from \cite{liu-zhang-chen:2014:ecml} and the \emph{Hierarchical PAM} (HPAM) from \cite{mimno-li-mccallum:2007:icml}. An analyst applying a related approach may therefore struggle with its complexity and with setting the hyper parameters. Although some of the above-mentioned solutions scale up to large datasets, the resulting topic trees remain rather shallow. In contrast, Topic Grouper offers deep trees and requires no hyper parameters. Deeper tree nodes cover only small sets of words and tend to become more specific. The fact that word sets are disjoint at every tree level may ease topic interpretation but it also imposes a limitation with regard to polysemic words. Related pros and cons will be addressed further in Sections \ref{viz} and \ref{sec:discussion}.
\subsection{Evaluation Regimes} \label{perplexity} Since there is typically no ground truth regarding topic models, a well-established \emph{intrinsic} evaluation scheme is to compute the log probability for test documents $d \in D_{test}$ withheld from the training data. In this context, estimating (the logarithm of) $p(d|\Phi, \alpha \textbf{m})$ via an LDA topic model $\Phi$ with its Dirichlet prior $\alpha \textbf{m}$ is a non-trivial problem in itself. We follow \cite{wallach-etal:2009:icml}, who determine this quantity conceptually as follows: \begin{equation} \label{eq:ldadoc2est} p(d|\Phi, \alpha \textbf{m}) = \int p(\theta_d) \cdot \prod_{w \in V} (\sum_{t \in T} \Phi_t(w) \cdot \theta_d(t))^{f_d(w)} d\theta_d\ \textrm{with}\ \theta_d \sim Dirichlet(\alpha \textbf{m}) \end{equation} Note that apart from using $\Phi$ instead of $\phi$ from Section~\ref{sec:concepts}, Equation \ref{eq:ldadoc2est} and Equation \ref{eq:lda2} are the same. \cite{wallach-etal:2009:icml} also examine different approximation methods for Equation \ref{eq:ldadoc2est} and introduce their so-called ``left-to-right'' method. \cite{buntine2009} presents a refined and unbiased version of ``left-to-right'' named ``left-to-right sequential''. Regarding LDA, we report results based on the latter algorithm since it acts as a gold standard estimation for Equation \ref{eq:ldadoc2est} (see~\cite{buntine2009}). Like \cite{blei:2003:lda:944919.944937} and others, we use \textit{perplexity} as a derived measure to aggregate the predictive power of $\Phi$ over $D_{test}$: \begin{equation} \label{eq:perplexity} perplexity(D_{test}) := \exp (- \sum_{d \in D_{test}} \log p(d|\Phi, \alpha \textbf{m}) / \sum_{d \in D_{test}} |d|). \end{equation} In doing so, only words from the training vocabulary $V$ are considered, such that the size of a test document is $|d| = \sum_{w \in V} f_d(w)$. An intrinsic evaluation alone does not guarantee that learned topics coincide with human intuition and are interpretable. This is particularly important when topics are consumed by humans directly rather than being utilized as an intermediate step of a machine learning or natural language processing pipeline. \emph{Extrinsic} evaluations therefore resort to external resources to assess topic quality: for instance, \cite{DBLP:conf/nips/ChangBGWB09} describe two human experiments, one study on \emph{word intrusion} and another one on \emph{topic intrusion}, respectively. In the word intrusion task subjects are asked to identify which spurious (``intruder'') word was added to a topic after the fact to ``pollute'' it. If subjects identify the intruder artificially injected by the experimenter, this is a sign that the other words making up the topic are of good quality. In the topic intrusion task, subjects are asked to identify a ``rogue topic'' that has been added to a document (i.e., a topic that is not actually covered in the document). Regarding their setting, the authors find that ``surprisingly, topic models which perform better on held-out likelihood may infer less semantically meaningful topics''. \cite{Newman:2010:AET:1857999.1858011} experiment with word co-occurrence measures obtained via word statistics from WordNet, Wikipedia and the Google search engine. They combine related values as obtained from each pair of a topic's top words in order to compute \emph{topic coherence}, which they define as ``average semantic relatedness between a topic's words''.
Several variants of the resulting quality measures, including pointwise mutual information (PMI), matched the expectations of human annotators on the respective text collections. Unlike in the intruder scenario, annotators had to rate the coherence of topics as obtained from the training phase. Building on this work, \cite{W13-0102} compare four similarity functions for the automatic evaluation of topic coherence, including the cosine similarity, Dice coefficient and Jaccard coefficient. While Newman \textit{et al.} use PMI to measure similarity between a topic's top words directly, Aletras and Stevenson first map each word of a topic to a vector of co-occurring words as computed via word statistics from Wikipedia. Afterwards, the similarity measures are applied to such word vectors in order to estimate a topic's coherence. An evaluation based on three document collections and involving human judges shows that their approach performs better than using PMI directly. \cite{lau-newman-baldwin:2014:eacl} build on the work from \cite{DBLP:conf/nips/ChangBGWB09} and \cite{Newman:2010:AET:1857999.1858011} and also offer a good review of extrinsic evaluations for topic models. They use machine learning to automate the detection of intruder words and to automatically assess the degree of coherence of a topic, respectively. While they solved the latter task successfully, the former task posed problems. Perhaps surprisingly, they find that ``the correlation between the human ratings of intruder words and observed coherence is only modest'' and give a plausible example-based explanation in their paper. This look at extrinsic evaluation methods indicates that they are manifold and that related research is still ongoing. We therefore rely on hold-out performance for now as a well-established and more standardized criterion. Numerous topic modeling contributions suggest that at least a reasonable hold-out performance is a necessary criterion also for semantically meaningful topic models. Evidence usually comes from reporting such performance results in conjunction with example topics as learned from a text collection covering general knowledge (e.g.~see \cite{mimno-li-mccallum:2007:icml}, \cite{Kim:2012:MTH:2396761.2396861} or \cite{6802355}). We follow this scheme, but also leverage some simple \emph{synthetic datasets} in order to examine whether a modeling approach is able to recover \emph{the true topics} governing a dataset. \section{Topic Grouper} \label{sec:theory} \subsection{Model} \label{basics} Let $T(n) = \{t\ |\ t \subseteq V\} $ be a (topical) partitioning of $V$ such that $s \cap t = \emptyset$ for any distinct $s, t \in T(n)$, $\bigcup_{t \in T(n)} t = V$ and $|T(n)| = n$. Further, let the \emph{topic-word assignment} $t(w)$ be the topic of a word $w$ such that $w \in t \in T(n)$. Note that in the following, we also make use of the variables $D$, $V$, $f_d(w)$ and $\Phi$ as specified in Section \ref{sec:concepts}.
Our principal goal is to find an \emph{optimal} partitioning $T(n)$ for each $n$ via \[ argmax_{T(n)} q(T(n)), \textrm{with}\] \[ q(T(n)) := \prod_{d \in D} \prod_{w \in V, f_d(w) > 0} \left( p(w | t(w)) \cdot p(t(w) | d) \right) ^{f_d(w)}.\] The idea is that each document $d \in D$ is considered to be \emph{generated} via a simple stochastic process where a word $w$ in $d$ occurs by \begin{itemize} \item first sampling a topic $t$ according to a probability distribution $p(t | d)_{t \in T(n)}$, \item then sampling a word from $t$ according to the topic-word distribution $p(w | t)_{w \in V}$ \end{itemize} and so, the total probability of generating $D$ is proportional to $q(T(n))$. The optimal partitioning consists of $n$ pairwise disjoint subsets of $V$, whereby each subset is meant to represent a topic. By definition every word $w$ must be in exactly one of those sets. This may help to keep topics more interpretable for humans because they do not overlap with regard to their words. On the other hand, polysemic words can only support one topic, even though it would be justified to keep them in several topics due to multiple contextual meanings. Note that the approach considers a solution for every possible number of topics $n$ ranging between $|V|$ and one. To further detail our approach, we set \begin{itemize} \item $f(w) := \sum_{d \in D} f_d(w) > 0$, since otherwise $w$ would not be in the vocabulary, \item $|d| := \sum_{w \in V} f_d(w) > 0$, since otherwise the document would be empty, \item $f_d(t) := \sum_{w \in t} f_d(w)$ be the topic frequency in a document $d$ and \item $f(t) := \sum_{w \in t} f(w) = \sum_{d \in D} f_d(t)$ be the number of times $t$ is referenced in $D$ via some word $w \in t$. \end{itemize} Concerning $q(T(n))$ we use maximum likelihood estimations for $p(t(w) | d)$ and $p(w | t(w))$ based on $D$: \begin{itemize} \item $p(t(w) | d) \approx f_d(t(w)) / |d|$, which is $> 0$ if $f_d(w) > 0$, \item $p(w | t(w)) \approx f(w) / f(t(w))$, which is always $> 0$ since $f(w) > 0$. \end{itemize} Unfortunately, constructing the optimal partitionings $\{ T(n)\ |\ n = 1\dots|V| \}$ is computationally hard. \emph{We suggest a greedy algorithm that constructs suboptimal partitionings instead}, starting with $T(|V|) := \{ \{ w \}\ |\ w \in V \}$ as step $i = 0$. At every step $i = 1\dots|V| - 1$ the greedy algorithm joins two different topics $s,t \in T(|V| - (i - 1))$ such that $q(T(|V| - i))$ is maximized while $T(|V| - i) = \left( T(|V| - (i - 1)) - \{s, t\} \right) \cup \{ s \cup t \}$ must hold. Essentially, this results in an \emph{agglomerative clustering approach, where topics, not documents, form respective clusters}. For efficient computation we first rearrange the terms of $q(T(n))$ with a focus on topics in the outer factorization: \[ q(T(n)) = \prod_{t \in T(n)} \prod_{d \in D, f_d(t) > 0} \left( p(t | d)^{f_d(t)} \cdot \prod_{w\in t}p(w | t)^{f_d(w)}\right) \] The rearrangement relies on the fact that every word belongs to exactly one topic and enables the ``change of perspective'' towards topic-oriented clustering. We maximize $\log q(T(n))$ instead of $q(T(n))$, which is equivalent with respect to the $argmax$ operator.
This leads to \[ \log q(T(n)) = \sum_{t \in T(n)} \sum_{d \in D, f_d(t) > 0} (f_d(t) \cdot \log p(t | d) + \sum_{w\in t} f_d(w) \cdot \log p(w | t)) \approx \sum_{t \in T(n)} h(t) \] with the maximum likelihood estimation \begin{equation} \label{eq:hnt} h(t) := \sum_{d \in D, f_d(t) > 0} f_d(t) \cdot (\log f_d(t) - \log |d|) + \sum_{w\in t} f(w) \cdot \log f(w) - f(t) \cdot \log f(t). \end{equation} Using these formulas the best possible join of two (disjunctive) topics $s, t \in T(n)$ results in $T(n - 1)$ with \[ \log q(T(n - 1)) \approx \log q(T(n)) + \Delta h_n, \] \begin{equation} \label{eq:deltahn} \Delta h_n := max_{s,t \in T(n)} \Delta h(s,t)\ \textrm{and} \end{equation} \begin{equation} \label{eq:deltah} \Delta h(s,t) := h(s \cup t) - h(s) - h(t). \end{equation} From the perspective of clustering procedures $-\Delta h(s,t)$ is the cluster distance between $s$ and $t$. Note though, that it does not adhere to standard distance axioms. \subsection{Joining Two Topics $s$ and $t$} Considering the resulting algorithm, we can reuse $h(s)$ and $h(t)$ from prior computation steps in order to compute $h(s \cup t)$ efficiently: Regarding expression (\ref{eq:hnt}) from above, let $i(t) := \sum_{w\in t} f(w) \cdot \log f(w)$. We have $f_d(s \cup t) = f_d(s) + f_d(t)$, $f(s \cup t) = f(s) + f(t)$ and $i(s \cup t) = i(s) + i(t)$, and so \begin{equation} \label{eq:efficienth} \begin{aligned} h(s \cup t) = \sum_{d \in D, f_d(s) + f_d(t) > 0} (f_d(s) + f_d(t)) \cdot \left( \log (f_d(s) + f_d(t)) - \log |d|\right) + \\ i(s) + i(t) - (f(s) + f(t)) \cdot \log (f(s) + f(t)). \end{aligned} \end{equation} The terms $i(u)$ and $f(u)$ with $u = s, t$ will have been computed already during the prior steps of the resulting algorithm, i.e. when $t$ and $s$ were generated as topics. Thus, the computation of all sums over words $w$ can be avoided with respect to $h(s \cup t)$. This is essential for a reasonable runtime complexity. \subsection{Initialization} \label{initialization} During initialization, the resulting algorithm generates all one-word topics $t \in T(|V|)$. Given $t = \{ w \}$ we have \begin{equation} \label{eq:init1} h(\{w\}) = \sum_{d \in D, f_d(w) > 0} f_d(w) \cdot (\log f_d(w) - \log |d|). \end{equation} The algorithm also computes the best possible join partner $s = \{ v \}$ for some $t = \{ w \}$ and so \begin{equation} \label{eq:init2} \begin{aligned} h(\{ v, w \}) = \sum_{d \in D, f_d(v) + f_d(w) > 0} (f_d(v) + f_d(w)) \cdot (\log (f_d(v) + f_d(w)) - \log |d|) + \\ i(\{ v \}) + i(\{ w \}) - (f(v) + f(w)) \cdot \log (f(v) + f(w)). \end{aligned} \end{equation} The first sum in this expression is problematic because one would have to iterate over the document set to compute it. Using an inverted index, one can avoid looking at documents with $f_d(v) = 0$ and $f_d(w) = 0$. \subsection{Algorithm and Complexity} \label{complexity} Topic Grouper can be implemented via adaptations of standard agglomerative clustering algorithms: Listing \ref{lst:ehac} presents a related variant of the \emph{efficient hierarchical agglomerative clustering} (EHAC) taken from \cite{manning:2008:iir:1394399}, which manages a map of priority queues in order to represent evolving clusters during the agglomeration process. EHAC's time complexity is in $O(k^2 \log k)$ and its space complexity in $O(k^2)$ with $k$ being the initial number of clusters. However, this implies that the cost of computing the distance between two clusters is in $O(1)$. 
In the case of Topic Grouper the latter cost is in $O(|D|)$ instead, because one must compute the value of $h$ from Equation \ref{eq:efficienth}. The factor ``$\log k$'' from EHAC's original time complexity accounts for access to priority queue elements -- in the case of Topic Grouper this is dominated by the cost to compute $h$-values. Putting it together, the time complexity for Listing \ref{lst:ehac} is on the order of $|V|^2 \cdot |D|$ and its space complexity is in $O(|V|^2)$. In case of text, one may further assume that Heaps' Law holds (\cite{book/heaps}): Without a fixed limit on the vocabulary, we then have about $|V|^2 \sim |D|$, leading to a simplified time complexity estimation for Topic Grouper roughly on the order of $|D|^2$. The stated space complexity, $O(|V|^2)$, can be problematic if the vocabulary is large. We devised an alternative clustering algorithm, MEHAC, whose space complexity is in $O(|V|)$ but its \emph{expected} time complexity is still in $O(|V|^2 \cdot |D|)$. A drawback of MEHAC is that in practice, it incurs a higher constant computation time factor than EHAC. So, given sufficient memory, the EHAC variant is preferable. MEHAC is detailed in Appendix \ref{mehac}. Appendix \ref{pperformance} highlights the practical performance of both algorithms based on example datasets. \newpage \lstset{numbers=left,morekeywords={new, foreach, var, foreach, procedure, print, insert, remove, add, null, while, true, false, clear},basicstyle=\tiny,escapeinside={(*}{*)}} \begin{lstlisting}[mathescape=true,caption={Variant of Efficient Agglomerative Clustering (EHAC) for Topic Grouper\\},label=lst:ehac] // (*\textbf{Input: $V, D, f_d(w)$ and $f(w)$ according to Section \ref{basics}}*) // (*\textbf{Output: Relevant changes of T -- the current set of topics -- printed out.}*) // (*\textbf{Global variables}*) var T := $\emptyset$; // Current set of topics, topics are assumed to be fully ordered (no matter how) // Map of priority queues of topics. Each topic s from T acts as a key and maps to one queue. // Moreover, each queue's topics t are sorted in descending order on the basis of $\Delta h(s,t)$: var pq[]; // Map for parameters from Equation (*\ref{eq:efficienth}*), topics from T are used as keys: var h[], f[], i[], fd[]; // (*\textbf{Initialization step $i$ = 0}*) foreach $w \in V$ { // Filling T var t := { $w$ }; insert t into T; h[t] := $h(t)$ according to Equation (*\ref{eq:init1}*) foreach $d \in D$ { fd[(t,d)] := $f_d(w)$; } f[t] := $f(w)$; i[t] := $f(w) \cdot \log f(w)$; } print T; foreach t $\in$ T { pq[t] := new PriorityQueue(); } foreach s $\in$ T { // Computing initial join partners foreach t $\in$ T with t > s { var u := s$\cup$t; h[u] := $h(u)$ according to Equation (*\ref{eq:init2}*); var $\Delta$h := h[u] - h[s] - h[t]; add t to pq[s] on the basis of $\Delta$h; add s to pq[t] on the basis of $\Delta$h; } } // (*\textbf{Steps i $>$ 0 to join topics}*) while (|T| > 1) { var s := $argmax_{\textrm{r} \in \textrm{T}}$ pq[r].peek.$\Delta$h; // Determine queue pq[s] with best head on the basis of $\Delta h$. var t := pq[s].pull; // Remove head from pq[s] and return it. 
var u := s$\cup$t; remove s from T; remove t from T; insert u into T; print T; pq[u] := new PriorityQueue(); // Update data structures: foreach $d \in D$ { fd[(u, d)] := fd[(s,d)] + fd[(t,d)]; clear fd[(s,d)], fd[(t,d)]; } f[u] := f[s] + f[t]; i[u] := i[s] + i[t]; clear pq[s], pq[t], h[s], h[t], f[s], f[t], i[s], i[t]; foreach v $\in$ T { remove s from pq[v]; remove t from pq[v]; } // Update join partners for u: foreach r $\in$ T with r $\neq$ u { var v := r$\cup$u; h[v] := $h(v)$ according to Equation (*\ref{eq:efficienth}*); var $\Delta$h := h[v] - h[r] - h[u]; add r to pq[u] on the basis of $\Delta$h; add u to pq[r] on the basis of $\Delta$h; } } \end{lstlisting} \section{Experiments} \label{sec:evaluation} \subsection{Synthetic Data} \label{syn_data} This section provides a first evaluation of Topic Grouper using simple synthetically generated datasets. As the true topics $S$ are known (i.e., we have gold data), this allows us to consider \emph{error rate} as a quality measure and to examine some basic qualities of our approach: the idea is to compare a model $\Phi$ against the \emph{true} topic-word distributions used to generate a dataset. The following definition of the error rate $err$ assumes that the perfect number of topics is already known, such that $|T| := |S|$ is preset for training. The order of topics in topic models is unspecified, so we try every \emph{bijective} mapping $\pi : T \rightarrow S$ when comparing a computed model $\Phi$ with a true model $\tilde{p}(V|S)$ and favor the mapping that minimizes the error: \[ err := \min_{\pi} \frac{1}{2 |T|} \sum_{t \in T} \sum_{w \in V} |\Phi_t(w) - \tilde{p}(w| \pi(t))|.\] The measure is designed to range between 0 and 1, where 0 is perfect. Considering a mapping $\pi$, every topic may contribute equally to lower the error rate. The factor $1/2$ avoids double counting, since a quantity $\Phi_t(w)$ exceeding $\tilde{p}(w| \pi(t))$ will be missing for other words $w'$, i.e. $\Phi_t(w')$ will then be too low. \subsubsection{Datasets According to \cite{tan-ou:2010:iscslp}} \label{twcdataeval} We use a simple synthetic data generator as introduced in \cite{tan-ou:2010:iscslp}: It is based on $|V| = 400$ (artificial) words equally divided into $4$ \emph{disjoint} topics $S = \{ s_1, \ldots, s_4 \}$. The words are represented by numbers, such that $0\ldots99$ belongs to $s_1$, $100\ldots 199$ to $s_2$ and so on. Concerning the 100 words of a topic $s_i$, the topic-word distribution $\tilde{p}(w|s_i)_{w \in V}$ is drawn independently for each topic from a Dirichlet distribution with a symmetric prior $\tilde{\beta} = 1/100$, such that $\sum_{w = (i - 1) \cdot 100}^{i \cdot 100 - 1} \tilde{p}(w|s_i) = 1$. A resulting dataset holds 6,000 documents with each document consisting of 30 word occurrences. A document-topic distribution $\tilde{p}(s|d)_{s \in S}$ is drawn independently for each document via a Dirichlet with the prior $\tilde{\alpha} \tilde{\textbf{m}} = (5, 0.5, 0.5, 0.5)^\top$, where topic 1 with $\tilde{\alpha} \tilde{\textbf{m}}_1 = 5$ is meant to represent a typical ``stop word topic'', which is more likely than the other topics. To generate a word occurrence for a document $d$, the occurrence's topic $s_i$ is first drawn via $\tilde{p}(S|d)$. Then, the word is drawn via $\tilde{p}(V|s_i)$. For the results below, we generated two random datasets (``1'' and ``2'') this way, where each has its specific topic-word distributions $\tilde{p}(V|s_i)$.
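To make this setup concrete, the following Python sketch generates data in the spirit of the above description and computes $err$ by brute force over all $|T|! = 24$ bijections. The names and the random seed are our own choices for illustration, not those of \cite{tan-ou:2010:iscslp}.
\begin{lstlisting}[language=Python]
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

def generate_dataset(n_topics=4, words_per_topic=100, n_docs=6000,
                     doc_len=30, beta=0.01, alpha_m=(5.0, 0.5, 0.5, 0.5)):
    """Synthetic data with disjoint topics over a vocabulary of
    n_topics * words_per_topic word ids."""
    n_words = n_topics * words_per_topic
    true_phi = np.zeros((n_topics, n_words))
    for i in range(n_topics):   # topic i owns the word block [i*100, (i+1)*100)
        block = rng.dirichlet([beta] * words_per_topic)
        true_phi[i, i * words_per_topic:(i + 1) * words_per_topic] = block
    docs = np.zeros((n_docs, n_words), dtype=int)
    for d in range(n_docs):
        theta = rng.dirichlet(alpha_m)           # document-topic distribution
        for t in rng.choice(n_topics, size=doc_len, p=theta):
            docs[d, rng.choice(n_words, p=true_phi[t])] += 1
    return docs, true_phi

def error_rate(phi, true_phi):
    """err = min over bijections pi of
    1/(2|T|) * sum_t sum_w |phi_t(w) - p(w|pi(t))|."""
    n = phi.shape[0]
    return min(np.abs(phi - true_phi[list(pi)]).sum() / (2 * n)
               for pi in permutations(range(n)))
\end{lstlisting}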
\subsubsection{Results} Figure \ref{ldasymresult} shows the error rate of LDA as well as Topic Grouper for the two datasets from Section \ref{syn_data}. The values were produced using a 75\% random sub-sample of each dataset for training. The remaining 25\% were used as test data in order to compute perplexity --- corresponding results can be found in Appendix \ref{perplexity_syn}. \begin{figure} \begin{center} \begin{tikzpicture} \begin{axis}[ xlabel=$\alpha \textbf{m}_1$, ylabel=Error Rate, ymax=1, height=8cm ] \addplot[smooth,color=black,mark=|] table [x=alpha1, y=errorRate, col sep=semicolon] {./csv/twc/TWCUnigramPerplexityChangeAlphaExp.csv}; \addlegendentry{Unigram 1 and 2} \addplot[smooth,color=brown,mark=triangle*,error bars/.cd, y dir = both, y explicit] table [x=alpha1, y=errorRateAvg, y error=errorRateStdDev, col sep=semicolon] {./csv/twc/TWCPLSAPerplexityChangeAlphaExp.csv}; \addlegendentry{pLSI 1} \addplot[smooth,color=red,mark=o,error bars/.cd, y dir = both, y explicit] table [x=alpha1, y=errorRateAvg, y error=errorRateStdDev, col sep=semicolon] {./csv/twc/TWCLDAPerplexityChangeAlphaExp.csv}; \addlegendentry{LDA 1} \addplot[smooth,color=orange,mark=x,error bars/.cd, y dir = both, y explicit] table [x=alpha1, y=errorRateAvg, y error=errorRateStdDev, col sep=semicolon] {./csv/twc/TWCLDAPerplexityChangeAlphaExp2.csv}; \addlegendentry{LDA 2} \addplot[smooth,color=blue,mark=*] table [x=alpha1, y=errorRate, col sep=semicolon] {./csv/twc/TWCTGPerplexityChangeAlphaExp.csv}; \addlegendentry{Topic Grouper 1 and 2} \addplot[smooth,color=green,mark=square*] table [x=alpha1, y=errorRate, col sep=semicolon] {./csv/twc/TWCPerfectPerplexityChangeAlphaExp.csv}; \addlegendentry{Perfect 1 and 2} \end{axis} \end{tikzpicture} \caption{Error Rate Depending on $\alpha \textbf{m}_1$ for Two Datasets Generated According to \cite{tan-ou:2010:iscslp}} \label{ldasymresult} \end{center} \end{figure} Regarding LDA, the depicted values are averaged across 50 runs per data point, whereby the random seed for the Gibbs sampler was changed for every run. The symmetric hyper parameter $\beta$ was optimized using Minka's update (see Section \ref{ldahparam}). LDA's $\alpha \textbf{m}$ changes along the X axis such that $\alpha = \tilde{\alpha} = 6.5$ and $\textbf{m}_2 = \textbf{m}_3 = \textbf{m}_4$ always hold. The results stress the importance of the hyper parameter choice for model quality under LDA with regard to $\alpha \textbf{m}$. This conforms to respective findings from \cite{conf/nips/wallachmm09}. Note that a symmetric $\alpha \textbf{m}$ with $\alpha \textbf{m}_1 = 1.625$ fails to deliver low error rates. LDA performs better as $\alpha \textbf{m}$ approaches the true $\tilde{\alpha} \tilde{\textbf{m}}$, which governs the datasets. \emph{In this setting, Topic Grouper delivers good error rates right away.} As its results are independent of $\alpha \textbf{m}$ and $\beta$ and also deterministic, they are included as a horizontal line. We also added results for pLSI as an alternative approach introduced by \cite{hofmann:1999:uai} (where dataset 2 is omitted for visual clarity): pLSI attains only mediocre and volatile results, heavily depending on its random initialization values. We therefore excluded it from the evaluations on the other datasets below. The unigram model simply sets $\Phi_t(w) := f(w) / \sum_{w \in V} f(w)$ for any $t$.
For completeness and for reference, we finally added a theoretically ``perfect model'': It determines the topic-word probabilities on the basis of the training data while using the perfect topic-word assignment as known from data generation. It is worth mentioning that we ran additional experiments with many other configurations of the data generator from Section \ref{twcdataeval}: E.g., we varied the number of topics, words per document, vocabulary size, number of documents and $\tilde{\alpha} \tilde{\textbf{m}}$ but kept up the unique topic-word assignment as part of the generation. The results obtained were analogous to the reported ones. We also compared how LDA's and Topic Grouper's error rates drop with an increasing number of training documents generated according to Section \ref{twcdataeval}. In favour of LDA, we set $\alpha \textbf{m} := \tilde{\alpha} \tilde{\textbf{m}}$. With the number of documents ranging between 500 and 10,000, both approaches attained roughly similar performance (not depicted). Figure \ref{likelihoodresult} illustrates how $\Delta h_n$ from Equation \ref{eq:deltahn} can be used as a suitable measure to determine a good number of topics in the context of Topic Grouper. Here, the sudden drop of $\Delta h_n$ at $n = 3$ means that at least four topics are required to model the data adequately. A similar approach is often taken for LDA: E.g. \cite{griffiths-steyvers:2004:pnas} visualize the log probability $\log p(D)$ of the training dataset $D$ under LDA where the number of topics $n$ is varied. While under LDA a separate training run is required for every $n$, Topic Grouper assesses all potential values of $n$ between $|V|$ and 1 within a single run. \begin{figure} \begin{center} \begin{tikzpicture} \begin{axis}[scaled ticks=false, tick label style={/pgf/number format/fixed}, axis y line*=left, x dir=reverse, xlabel=Number of Topics $n$, ylabel=$\Delta h_n$, width=10cm, height=6cm, legend style={at={(0.5,0.5)},anchor=east} ] \addplot[color=blue,mark=o] table [x=ntopics, y=improvement, col sep=semicolon] {./csv/twc/TWCLikelihoodTG.csv}; \label{plot_one} \addlegendentry{} \end{axis} \begin{axis}[scaled ticks=false, tick label style={/pgf/number format/fixed}, axis y line*=right, axis x line=none, x dir=reverse, ylabel=$\Delta h_n / \Delta h_{n+1}$, width=10cm, height=6cm, legend style={at={(0.5,0.5)},anchor=east} ] \addlegendimage{/pgfplots/refstyle=plot_one}\addlegendentry{$\Delta h_n$} \addplot[color=red,mark=x] table [x=ntopics, y=improvementratio, col sep=semicolon] {./csv/twc/TWCLikelihoodTG.csv}; \addlegendentry{$\Delta h_n / \Delta h_{n+1}$} \end{axis} \end{tikzpicture} \caption{$\Delta h_n$ from Equation \ref{eq:deltahn} Depending on the Number of Topics $n$ for a Dataset Generated According to \cite{tan-ou:2010:iscslp}} \label{likelihoodresult} \end{center} \end{figure} \subsection{Real-World Datasets} \label{realworlddata} This section reports perplexity results for two retail datasets and two text-based datasets. The log probability for test documents is estimated as described in Section \ref{perplexity} and perplexity is computed via Equation \ref{eq:perplexity}. \emph{Fortunately this approach can be applied to models computed via LDA and Topic Grouper alike}: In the latter case, we set $\Phi_t(w) := \delta_{t(w), t} \cdot f(w) / f(t)$ with $t(w)$ being the topic to which $w$ belongs and $f(w) / f(t(w))$ being the maximum likelihood estimate.\footnote{$\delta$ is the Kronecker symbol.} (This implies that $\Phi_t(w) = 0$ if $t(w) \neq t$.)
As there is no predefined prior $\alpha \textbf{m}$ under Topic Grouper, we simply set $\textbf{m}_t = f(t) / \sum_t f(t)$ -- the maximum likelihood estimate for $p(t)$. Finally, we determine a suitable value for the concentration parameter $\alpha$ via an interval search with the optimization goal being low perplexity on the training data. The $\alpha \textbf{m}$ obtained this way is used during test. We believe the approach is fair because \emph{it focuses on the quality of a topic model} $\Phi$ regardless of its underlying training method. Due to $\theta$'s Dirichlet, it also avoids (additional) smoothing schemes for Topic Grouper. \subsubsection{Retailing} Regarding retailing, a shopping basket or an order is equivalent to a document. Articles correspond to words from a vocabulary and item quantities transfer to word occurrence frequencies in documents. In this context, topics represent groups of articles as typically bought or ordered together. Therefore, inferred topic models may be leveraged to optimize sales-driven catalog structures, to develop layouts of product assortments (\cite{CHEN2007976}) or to build recommender systems (\cite{Wang:2011:CTM:2020408.2020480}). The ``Online Retail'' dataset is a ``transnational dataset which contains all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based ... online retail'' obtained from the UCI Machine Learning Repository (\cite{Chen2012}).\footnote{See \url{https://archive.ics.uci.edu/ml/datasets/Online+Retail}.} We performed data cleaning by removing erroneous and inconsistent orders. Item quantities are highly skewed with about 5\% above 25, some reaching values of over 1,000. This is due to a mixed customer base including consumers and wholesalers. We therefore excluded all order items with quantities above 25 to focus on small-scale (parts of) orders. We randomly split such preprocessed orders into 90\% training and 10\% test data, keeping only articles that were ordered at least 10 times in the training data. The resulting training dataset covers $|V| = 3,464$ articles, $|D| = 17,086$ orders and 427,150 order items. The resulting average sum of item quantities per order is about 154. Figure \ref{onlineretailresult} shows that optimized LDA and Topic Grouper are closely matched beyond 80 topics with optimized LDA performing slightly better. In comparison, the performance of LDA with heuristics begins to degrade at 80 topics. Topic Grouper is competitive although its underlying topic model is more constrained (as each article, respectively word, belongs to exactly one topic).
\begin{figure} \begin{center} \begin{tikzpicture} \begin{axis}[ xlabel=Number of Topics, ylabel=Perplexity, width=10cm, height=7cm ] \addplot[smooth, color=red, mark=o] table [x=topics, y=perplexityLR, col sep=semicolon] {./csv/perplexity/OnlineRetail/OnlineRetailLDAPerplexityExperiment.csv}; \addlegendentry{LDA with Heuristics} \addplot[smooth, color=green, mark=x] table [x=topics, y=perplexityLR, col sep=semicolon] {./csv/perplexity/OnlineRetail/OnlineRetailLDAPerplexityExperimentOpt.csv}; \addlegendentry{LDA Optimized} \addplot[smooth,color=blue, mark=*] table [x=topics, y=perplexity, col sep=semicolon] {./csv/perplexity/OnlineRetail/OnlineRetailTGLRPerplexityExperiment.csv}; \addlegendentry{Topic Grouper} \end{axis} \end{tikzpicture} \caption{Perplexity on the Preprocessed Online Retail Dataset} \label{onlineretailresult} \end{center} \end{figure} The ``Ta Feng'' dataset was published on the ACM RecSys Wiki\footnote{See \url{http://www.recsyswiki.com}.}: It captures shopping baskets of consumers from a Taiwanese grocery store collected over four months between 2000 and 2001. It covers 23,812 articles and 119,578 shopping baskets, but the average number of goods in a basket is only about 9.5 with about 6.8 different articles. For data cleaning, we removed unlikely item quantities above 50 from shopping baskets. Again, we split the remaining data based on a 90\% to 10\% ratio, keeping only articles that were bought at least 20 times in the training data. This left $|V| = 7,893$ articles for training. Figure \ref{tafengresult} shows the respective perplexity results. LDA Optimized clearly dominates, but Topic Grouper surpasses LDA with Heuristics at about 180 topics. LDA with Heuristics fails at higher topic numbers due to inappropriate hyperparameter settings. \begin{figure} \begin{center} \begin{tikzpicture} \begin{axis}[ xlabel=Number of Topics, ylabel=Perplexity, width=10cm, height=7cm ] \addplot[smooth, color=red, mark=o] table [x=topics, y=perplexityLR, col sep=semicolon] {./csv/perplexity/TaFeng/TaFengLDAPerplexityExperiment.csv}; \addlegendentry{LDA with Heuristics} \addplot[smooth, color=green, mark=x] table [x=topics, y=perplexityLR, col sep=semicolon] {./csv/perplexity/TaFeng/TaFengLDAPerplexityExperimentOpt.csv}; \addlegendentry{LDA Optimized} \addplot[smooth,color=blue, mark=*] table [x=topics, y=perplexity, col sep=semicolon] {./csv/perplexity/TaFeng/TaFengTGLRPerplexityExperiment.csv}; \addlegendentry{Topic Grouper} \end{axis} \end{tikzpicture} \caption{Perplexity on the Preprocessed Ta Feng Dataset} \label{tafengresult} \end{center} \end{figure} \subsubsection{Text} The first of two examples is a subset of the TREC AP corpus containing 20,000 newswire articles.\footnote{See \url{https://catalog.ldc.upenn.edu/LDC93T3A}.} We performed (Porter) stemming and kept every stem that occurs at least five times in the dataset. Moreover, we removed all tokens containing non-alphabetical characters or shorter than three characters. Again, we split the remaining documents on a 90\% to 10\% basis and only kept words occurring at least five times in the training data. This left $|V| = 25,047$ words for training. Figure \ref{aplresult} shows related results: Here, Topic Grouper performs generally worse than LDA but attains similar performance to LDA with Heuristics beyond about 200 topics. Despite these differences, we found that related topics generated by Topic Grouper are reasonably conclusive and coherent.
We will elaborate on this with regard to the AP Corpus in Section \ref{viz}. \begin{figure} \begin{center} \begin{tikzpicture} \begin{axis}[ xlabel=Number of Topics, ylabel=Perplexity, width=10cm, height=7cm ] \addplot[smooth, color=red, mark=o] table [x=topics, y=perplexityLR, col sep=semicolon] {./csv/perplexity/APLarge/APLDAPerplexityExperiment.csv}; \addlegendentry{LDA with Heuristics} \addplot[smooth, color=green, mark=x] table [x=topics, y=perplexityLR, col sep=semicolon] {./csv/perplexity/APLarge/APLDAPerplexityExperimentOpt.csv}; \addlegendentry{LDA Optimized} \addplot[smooth,color=blue, mark=*] table [x=topics, y=perplexity, col sep=semicolon] {./csv/perplexity/APLarge/APTGPerplexityExperiment.csv}; \addlegendentry{Topic Grouper} \end{axis} \end{tikzpicture} \caption{Perplexity on the Preprocessed AP Dataset} \label{aplresult} \end{center} \end{figure} The NIPS dataset is a collection of 1,500 research publications from the Neural Information Processing Systems Conference. We used a preprocessed version of the dataset from the UCI Machine Learning Repository as is.\footnote{See \url{https://archive.ics.uci.edu/ml/datasets/Bag+of+Words}.} It was already tokenized and had stop words removed, but no stemming was performed. We split the document set on a 90\% to 10\% basis and only kept words occurring at least five times in the training data. This left $|V| = 8,801$ words for training. Figure \ref{nipsresult} shows that LDA Optimized performs best but Topic Grouper outperforms LDA with Heuristics beyond about 70 topics. Together, the results of the four datasets suggest that Topic Grouper should be considered as an option especially when words incur little ambiguity. E.g., this tends to be the case for the retail examples, where words represent articles without an aspect of polysemy. Also, Topic Grouper tends to outperform LDA with Heuristics at a larger number of topics. \begin{figure} \begin{center} \begin{tikzpicture} \begin{axis}[ xlabel=Number of Topics, ylabel=Perplexity, width=10cm, height=7cm ] \addplot[smooth, color=red, mark=o] table [x=topics, y=perplexityLR, col sep=semicolon] {./csv/perplexity/NIPS/NIPSLDAPerplexityExperiment.csv}; \addlegendentry{LDA with Heuristics} \addplot[smooth, color=green, mark=x] table [x=topics, y=perplexityLR, col sep=semicolon] {./csv/perplexity/NIPS/NIPSLDAPerplexityExperimentOpt.csv}; \addlegendentry{LDA Optimized} \addplot[smooth,color=blue, mark=*] table [x=topics, y=perplexity, col sep=semicolon] {./csv/perplexity/NIPS/NIPSTGLRPerplexityExperiment.csv}; \addlegendentry{Topic Grouper} \end{axis} \end{tikzpicture} \caption{Perplexity on the Preprocessed NIPS Dataset} \label{nipsresult} \end{center} \end{figure} \subsection{Feature Reduction and Document Classification} \label{featurered} This section compares the abilities of LDA, Topic Grouper, \emph{Information Gain} (IG) and \emph{Document Frequency} (DF) regarding feature reduction for text classification. In the first two cases, the idea is to exchange word occurrences for topic occurrences and thus to reduce feature-space dimensionality from the vocabulary size $|V|$ to the number of topics $|T|$. In contrast, IG and DF attain feature reduction by dropping words from the vocabulary (\cite{Yang:1997:CSF:645526.657137}, \cite{Forman:2003:EES:944919.944974}). We chose Naive Bayes as a classification method since it lends itself well to all four approaches. Firstly, it allows for a straightforward transfer from words to topics, as will be shown below.
Secondly, unlike \emph{Support Vector Machines} (SVMs), it does \emph{not} mandate additional hyperparameter settings, which would complicate the comparison and potentially incur bias. Moreover, approaches relying on a TF-IDF embedding (such as Rocchio or SVM in \cite{Joachims1998}) are problematic with regard to LDA because DF and IDF are undefined for topics. Note that our goal is \emph{not} to show that topic models can generally reduce the word feature space without (much) loss of classification accuracy. This has already been demonstrated in \cite{blei:2003:lda:944919.944937}. Instead, \emph{we focus on the relative performance of the four feature reduction techniques}. Including IG and DF allows for a direct comparison between topic modeling and word selection methods. Let $C = \{ c_1, \ldots, c_m \}$ be the set of classes for the training documents $D$. We assume that the class assignments $l(d) \in C, d \in D$ are unique and known with regard to $D$. We define $D_c$ as the subset of training documents belonging to class $c$, so $D_c = \{ d \in D | l(d) = c \}$. When using topics, Naive Bayes determines the class of a test document $d_{test}$ via \[ argmax_{c \in C} \log p(c | d_{test}) \approx argmax_c \log (p(c) \cdot \prod_{t\in T} p(t|c)^{f_{d_{test}}(t)})\] with $p(c)$ estimated by means of $p(c) \approx |D_c| / |D|$. Regarding Topic Grouper, $f_{d_{test}}(t)$ and $p(t|c)$ can be estimated via the topic-word assignments $t(w)$ from Section \ref{basics}. In total, this results in the following classification formula for Topic Grouper: \[ argmax_c \log (|D_c| / |D|) + \sum_{t\in T} f_{d_{test}}(t) \cdot \log ((1 + \sum_{d \in D_c} f_d(t)) / (n + \sum_{d \in D_c} |d|)).\] The ``$1 +$'' and ``$n + $'' in the second $\log$-expression form a standard Lidstone smoothing accounting for potential zero probabilities of the estimated $p(t|c)$. Other than that, its practical effect is negligible. For the best possible results under LDA, we estimate $f_{d_{test}}(t) \approx |d_{test}| \cdot p(t|d_{test})$. In order to compute $p(t|d_{test})$ accurately, we resort to the so-called fold-in method: A topic-word assignment $z_i$ is sampled for every word occurrence $w_i$ in $d_{test}$ using Gibbs sampling. This involves the use of the underlying topic model $\Phi$ and leads to a respective topic assignment vector $\textbf{z}$ of length $|d_{test}|$. More details on this sampling method can be found in Section 3 of \cite{wallach-etal:2009:icml}. The procedure is repeated $S$ times leading to $S$ vectors $\textbf{z}^{(s)}$. Together, these results form the basis of \[ p(t|d_{test}) \approx 1/S \cdot \sum_{s=1}^S 1/ |d_{test}| \sum_{i=1}^{|d_{test}|} \delta_{\textbf{z}_i^{(s)},t}.\] Moreover, we estimate $p(t|c) \approx (\sum_{d \in D_c} p(t|d) \cdot |d|) / \sum_{d \in D_c} |d|$. In this case, an approximation of $p(t|d)$ is known from running LDA on the training documents. As known from \cite{Joachims1998}, Naive Bayes is robust against a large number of features, i.e., words, and performs best without any feature reduction. So one cannot hope for increased classification accuracy, but only for little loss in accuracy, when transferring to an ever smaller number of topics. The results are also a rough indicator of how well topics coincide with a human classification scheme: If topics tended to cover many words across classes, the probabilities $p(t|c)$ would be less peaked and Naive Bayes' classification accuracy would suffer (more).
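As an aside, the classification rule for Topic Grouper can be stated compactly in code. The following is a minimal Python sketch (our own illustration; the per-document topic counts $f_d(t)$ are assumed to be precomputed via the hard assignment $t(w)$):
\begin{verbatim}
import numpy as np

def train_nb(fd_train, labels, n_topics):
    """fd_train: matrix of topic counts f_d(t), one row per document.
    Returns log p(c) and Lidstone-smoothed log p(t|c) per class."""
    labels = np.asarray(labels)
    log_prior, log_ptc = {}, {}
    for c in sorted(set(labels)):
        rows = fd_train[labels == c]
        log_prior[c] = np.log(len(rows) / len(fd_train))
        counts = rows.sum(axis=0)
        # (1 + sum_d f_d(t)) / (n + sum_d |d|) as in the formula above
        log_ptc[c] = np.log((1.0 + counts) / (n_topics + counts.sum()))
    return log_prior, log_ptc

def classify(fd_test, log_prior, log_ptc):
    """argmax_c log p(c) + sum_t f_dtest(t) * log p(t|c)."""
    return max(log_prior, key=lambda c: log_prior[c] + fd_test @ log_ptc[c])
\end{verbatim}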
We work with two popular datasets, namely ``Reuters 21578'' and ``Twenty News Groups'': \begin{itemize} \item Reuters 21578\footnote{See \url{http://www.daviddlewis.com/resources/testcollections/reuters21578/} (cited 2018-03-04).} is a text collection of business news in English with more than 120 class labels, most of them rarely occurring, and 21,578 (partly unlabeled) documents. We chose the ten most frequent labels and kept all documents with exactly one class label. Moreover, we applied the so-called ModApte split, leading to 7,142 documents for training and 2,513 for test. We performed (Porter) stemming and kept every stem that occurs at least three times in the training data. This way, we ended up with a training vocabulary of 9,567 stemmed words excluding stop words. \item Twenty News Groups is a collection of newsgroup messages covering twenty areas of social discussion. We used a reworked version of the collection consisting of 18,846 documents each belonging to just one class.\footnote{See \url{http://qwone.com/~jason/20Newsgroups/} (cited 2018-03-04).} We applied a random split into training and test documents based on a 75\% to 25\% ratio. Again, we performed (Porter) stemming and kept every stem that occurs at least five times in the dataset. Moreover, we removed all tokens containing non-alphabetical characters or shorter than three characters. This way, we ended up with a training vocabulary of 25,826 stemmed words excluding stop words. \end{itemize} Figures \ref{reutersclassresult} and \ref{twentyngclassresult} present classification accuracy as a function of the number of topics or words, respectively, using micro averaging. Our findings confirm the impressive abilities of LDA for feature reduction as reported in \cite{blei:2003:lda:944919.944937} when applying hyperparameter optimization. Beyond 700 topics, the heuristic setting degrades LDA's performance. In accordance with \cite{Yang:1997:CSF:645526.657137} and \cite{Forman:2003:EES:944919.944974}, the results confirm that IG performs better than DF. The performance of Topic Grouper depends on the dataset and ranges below ``LDA Optimized'' but considerably above IG in Figure \ref{twentyngclassresult}, whereas in Figure \ref{reutersclassresult} ``LDA Optimized'', IG and Topic Grouper are close above 200 topics or words, respectively.
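For reference, the IG baseline scores each word by the expected reduction in class entropy given the word's presence or absence in a document (\cite{Yang:1997:CSF:645526.657137}). A minimal Python sketch under this binary formulation (variable names are ours):
\begin{verbatim}
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def class_entropy(labels, classes):
    return entropy(np.array([(labels == c).mean() for c in classes]))

def information_gain(has_w, labels, classes):
    """IG(w) = H(C) - p(w) H(C|w) - p(not w) H(C|not w);
    has_w is a boolean per-document indicator for word w."""
    ig = class_entropy(labels, classes)
    for mask in (has_w, ~has_w):
        if mask.any():
            ig -= mask.mean() * class_entropy(labels[mask], classes)
    return ig
\end{verbatim}
Words are then ranked by their IG scores and the vocabulary is truncated to the desired size; DF simply ranks by document frequency instead.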
\begin{figure} \begin{center} \begin{tikzpicture} \begin{axis}[ xmode=log, log ticks with fixed point, xlabel=Number of Topics $n$ or Number of Selected Words, xmin=10, xmax=2000, ylabel=Accuracy, width=14cm, height=10cm, legend pos=south east ] \addplot[color=red, mark=o] table [x=topics, y=microAvg, col sep=semicolon] {./csv/classification/ReutersLDAClassificationExperiment.csv}; \addlegendentry{LDA with Heuristics} \addplot[color=green, mark=x] table [x=topics, y=microAvg, col sep=semicolon] {./csv/classification/ReutersLDAClassificationExperimentOpt.csv}; \addlegendentry{LDA Optimized} \addplot[color=blue, mark=*] table [x=topics, y=microAvg, col sep=semicolon] {./csv/classification/ReutersTGNaiveBayesExperiment.csv}; \addlegendentry{Topic Grouper} \addplot[color=orange, mark=square] table [x=topics, y=microAvg, col sep=semicolon] {./csv/classification/ReutersVocabIGClassificationExperiment.csv}; \addlegendentry{IG} \addplot[color=brown, mark=triangle] table [x=topics, y=microAvg, col sep=semicolon] {./csv/classification/ReutersVocabDFClassificationExperiment.csv}; \addlegendentry{DF} \end{axis} \end{tikzpicture} \caption{Micro Averaged Classification Accuracy of Naive Bayes on Reuters 21578 Depending on the Log-Scaled Number of Features} \label{reutersclassresult} \end{center} \end{figure} \begin{figure} \begin{center} \begin{tikzpicture} \begin{axis}[ xmode=log, log ticks with fixed point, xlabel=Number of Topics $n$ or Number of Selected Words, xmin=10, xmax=2000, ylabel=Accuracy, width=14cm, height=10cm, legend pos=south east ] \addplot[color=red, mark=o] table [x=topics, y=microAvg, col sep=semicolon] {./csv/classification/TwentyNGLDAClassificationExperiment.csv}; \addlegendentry{LDA with Heuristics} \addplot[color=green, mark=x] table [x=topics, y=microAvg, col sep=semicolon] {./csv/classification/TwentyNGLDAClassificationExperimentOpt.csv}; \addlegendentry{LDA Optimized} \addplot[color=blue, mark=*] table [x=topics, y=microAvg, col sep=semicolon] {./csv/classification/TwentyNGTGNaiveBayesExperiment.csv}; \addlegendentry{Topic Grouper} \addplot[color=orange, mark=square] table [x=topics, y=microAvg, col sep=semicolon] {./csv/classification/TwentyNGVocabIGClassificationExperiment.csv}; \addlegendentry{IG} \addplot[color=brown, mark=triangle] table [x=topics, y=microAvg, col sep=semicolon] {./csv/classification/TwentyNGVocabDFClassificationExperiment.csv}; \addlegendentry{DF} \end{axis} \end{tikzpicture} \caption{Micro Averaged Classification Accuracy of Naive Bayes on Twenty News Groups Depending on the Log-Scaled Number of Features} \label{twentyngclassresult} \end{center} \end{figure} When applying topic modeling this way, an important point to consider is the computational overhead for model generation but also the feature reduction overhead for new documents at classification time. Once a Topic Grouper model is built, its use for feature reduction incurs minimal overhead: a word from a test document $d_{test}$ can be reduced in constant time via the topic-word assignment $t(w)$. Thus, the total feature reduction cost for a test document remains on the order of $|d_{test}|$. In contrast, LDA requires the relatively complex fold-in computation of $p(t|d_{test})$, which is on the order of $S \cdot |d_{test}| \cdot n$ for a test document. Model generation for LDA tends to become computationally expensive when the number of topics is high because its runtime depends linearly on $|T|$. We experienced this when producing the above results at about $n \geq 500$.
In comparison, Topic Grouper's computation time remained moderate even for the Twenty News Groups dataset with $|V| > 25,000$. As noted before, Topic Grouper assesses all values for $n$ between $|V|$ and one within a single run. The latter \emph{allows the degree of feature reduction to be adjusted in hindsight} without the need for topic model recomputations. We believe that this favorable combination of qualities places Topic Grouper as a promising alternative to IG and DF with actual practical relevance. \subsection{Model Visualization and Inspection} \label{viz} Topic Grouper returns hierarchical topic models by design. The hierarchy of topics may be explored interactively assuming that larger topics form a kind of semantic abstraction of contained smaller topics. Much as under LDA, the meaning of a topic may be represented by its top-most frequent words on every containment level. Analyzing results this way may give users additional insight into the nature of a document collection's inherent topics. Figure \ref{screenshot} shows a screenshot of a simple tool that we built for this purpose. The upper half of the window allows for exploring the containment structure of topics via a hierarchy of folders. The lower half of the window displays a \textit{flat topic view}, which is a list of topics $T(n)$ as they occur together during a run of Topic Grouper according to Section \ref{basics}. The number $n$ can be changed interactively, causing an instant update of the displayed topic list. Each topic from the list is displayed in one table row with the ten most frequent words included. A click on a table row selects the corresponding hierarchy node in the upper half of the window. The depicted model in Figure \ref{screenshot} is Topic Grouper's result on the AP Corpus dataset from Section \ref{realworlddata}. \begin{figure} \center{\includegraphics[width=15cm]{screenshot.png}} \caption{Screenshot of a Simple Tool to Explore the Containment Hierarchy of Topic Models Produced by Topic Grouper} \label{screenshot} \end{figure} To reflect the containment hierarchy of topics, we also created tree diagrams using the mind map tool FreeMind.\footnote{See \url{http://freemind.sourceforge.net}.} Topics are represented as nodes and, for reference, they are identified by the number $n$ under which they were generated. Figure \ref{mindmap2} presents a corresponding mind map for the AP Corpus dataset from Section \ref{realworlddata}. All nodes below level six are collapsed in order to deal with limited presentation space. A node contains the five most frequent words of a respective topic. More frequent topics are shaded in blue (as they tend to collect low-content words and stop words), whereas less frequent word sets are shaded in red. The contents of the tree may be interpreted as follows: The root forks into node (4) covering economy and weather as well as node (2) covering other topics and function words. Function words are mainly gathered along the path (1)/(2)/(3)/(6)/(11) and the sub-path (9)/(12)/(23). Node (4) forks into financial topics (14) and topics covering production and weather (17). Node (53) is on weather and potentially different weather regions. Node (46) covers agriculture and water supply whereas node (81) focuses on energy. Regarding node (14), we suspect that stock trading in (30) is separated from general banking and acquisitions in (31).
Other topics in the tree seem equally coherent, such as ``home and family'' (59), ``public media'' (25), ``jurisdiction and law'' (42), ``military and defense'' (50) and so forth. We find that such interpreted topics often meet the idea of being more general towards the root and more specific towards the leaves. However, mixed topics also arise, such as topic (21) combining ``drug trafficking'' in (73) with ``military and defense'' in (50). \begin{figure} \center{\includegraphics[width=15cm]{aplarge.pdf}} \caption{Mind Map Diagram as a Result of Topic Grouper on the AP Corpus Extract} \label{mindmap2} \end{figure} Table \ref{topiclist} lists topics from $T(40)$ for the AP Corpus dataset. To save presentation space, only every second topic in order of frequency is shown: Topics 47 and 69 gather function words and therefore have high frequency. Most topics seem conclusive but obviously, a more objective coherence analysis would be necessary. A corresponding study with human judges may follow the approach in \cite{DBLP:conf/nips/ChangBGWB09} but is beyond the scope of this article. \begin{figure} \begin{center} \small \begin{tabular}{|c|c|ccccccc|}\hline $n$ & $f(t)$ & \multicolumn{7}{c|}{\bfseries Top Seven Words per Topic $t$} \\\hline \csvreader[separator=semicolon,head to column names,late after line=\\\hline]{aplargetopics2nd.csv}{} {\topicid & \topicfr & \worda & \wordb & \wordc & \wordd & \worde & \wordf & \wordg} \end{tabular} \end{center} \caption{Every Second Topic of $T(40)$ Sorted by Topic Frequency for the AP Corpus Dataset} \label{topiclist} \end{figure} \section{Summary and Discussion} \label{sec:discussion} We have presented Topic Grouper as a novel and complementary method in the field of probabilistic topic modeling based on agglomerative clustering: Initial clusters or topics, respectively, each consist of one word from the vocabulary of the training corpus. Clusters are joined on the basis of a simple probabilistic model assuming that each word belongs to exactly one topic. Thus, topics or clusters form a disjoint partitioning of the vocabulary. After developing a related cluster distance $\Delta h$, we have adapted an existing clustering algorithm, EHAC, in order to compute related cluster trees as models. Dendrogram cuts in the tree serve as flat topic views, where a fixed number of topics may be chosen in the range between the vocabulary size $|V|$ and one. The adapted clustering algorithm makes use of the dynamic programming principle, leading to a time complexity in $O(|V|^2 \cdot |D|)$ and a space complexity in $O(|V|^2)$, where $|D|$ is the number of training documents. Since memory consumption may be an issue, we devised an additional algorithm, MEHAC, with an \emph{expected} time complexity in $O(|V|^2 \cdot |D|)$ but space complexity only on the order of $|V|$. Using simple synthetic datasets, where each word belongs to just one original topic, we examined some basic qualities of topic modeling methods: Topic Grouper manages to recover related original topics at a low error rate even when their a-priori probabilities are rather unbalanced. pLSI fails under these conditions. LDA is able to recover the original topics but only if its vectorial hyperparameter $\alpha \textbf{m}$ is adjusted accordingly.
On various real-world datasets, Topic Grouper's predictive performance matched or surpassed LDA with Heuristics at larger topic numbers but was still dominated by LDA Optimized; only the latter includes an optimization of the LDA-specific hyperparameters $\alpha \textbf{m}$ and $\beta$, whereas the former applies a commonly used heuristic for them. The results also suggest that the less polysemy there is in the vocabulary, the better Topic Grouper performs. This is consistent with Topic Grouper's simplifying model assumption. It makes the approach appealing, for instance, for shopping basket analysis, where articles stand for themselves: Related models may then aid in forming sales-driven catalog structures or layouts of product assortments since, in both cases, a clear-cut decision on where to place an article is customary. We also investigated Topic Grouper as a means for feature reduction in the field of supervised text classification: The results suggest that it outperforms standard techniques in the field such as Information Gain (IG) and Document Frequency (DF) but is dominated by LDA Optimized. However, LDA incurs a considerable runtime overhead at classification time, whereas Topic Grouper does not. Also, Topic Grouper allows for a dynamic change of the number of topics after training, whereas LDA would require retraining. Based on a corpus of newswire articles (AP Corpus), we showed how a tree model produced by Topic Grouper may be visualized and explored interactively. The presented corpus results exhibit the descriptive qualities of such deep tree models as well as the potential of related drill-downs from more general to more specific topics. Alternatively, flat views of an arbitrary number of topics between $|V|$ and one may be derived instantly from the generated model. Although this is a subjective impression, we found corresponding topics to be conclusive and coherent in both tree views and flat topic views. Obviously, this assessment demands a more objective study to follow, potentially similar to \cite{DBLP:conf/nips/ChangBGWB09}, \cite{Newman:2010:AET:1857999.1858011} or \cite{lau-newman-baldwin:2014:eacl}. For all text corpora, we found that Topic Grouper tends to push stop words and/or function words into separate topics. Therefore, it can do without stop word or function word filtering as a preprocessing step. The practical performance of our straightforward implementation ranged from several minutes to several hours for the larger datasets of this report and substantiated the theoretical complexity. \emph{A simple and effective means to increase time and space performance is obviously to reduce the vocabulary size $|V|$, e.g., by keeping only a few thousand of the most frequent words from the dataset.} The approach is well in line with the standard practice of focusing on high-probability words or, in the case of Topic Grouper, on high-frequency words when displaying and inspecting a topic model's topic-word distributions. In conclusion, we see Topic Grouper as a complementary approach in the tool set of topic modeling methods with a unique mix of pros and cons. The tree-based model, which also offers flat topic views, is an important asset. It allows for deep tree structures to be produced even on small-sized datasets. Another benefit is the method's simplicity: it requires no configuration or hyperparametrization and no stop word filtering.
The fact that each word is in exactly one topic is a considerable limitation and falls short for polysemic words and for words applied in multiple topical contexts. Nevertheless, we found actual topic models for text corpora to be conclusive as reported in Section \ref{viz}. In some cases, a clear-cut decision on where to place words may even be in accordance with an analyst's interests---we mentioned examples regarding shopping basket analysis. The results of this paper can all be reproduced via a prototypical Java library named ``TopicGrouperJ'' published on GitHub.\footnote{See \url{https://github.com/pfeiferd/TopicGrouperJ}.} The library features implementations of the corresponding algorithms MEHAC and EHAC. Amongst other things, it contains an LDA Gibbs Sampler with options for hyperparameter optimization and an implementation to compute perplexity as discussed in Section \ref{perplexity}. The code to regenerate any result file of the above-described experiments is also available. \section{Future Work} \label{sec:conclusion} Future research directions may include the \emph{parallelization} of the Topic Grouper algorithms MEHAC and EHAC along with other computational optimizations. Note that the parts of EHAC affecting data structure updates after joining two topics are straightforward to parallelize. An important concern for further work is \emph{model smoothing}, i.e., how to relax the constraint of each word being in exactly one topic: Regarding flat topic views, we experimented with a combination of Topic Grouper and LDA, where LDA acts as a post-processing step. To do so, a flat topic view $T$ from Topic Grouper is used to set the LDA hyperparameter $\beta$, which is then formed as a matrix in $\Re^{|V| \times |T|}$, where each column corresponds to a designated topic $t \in T$. A matrix element in $t$'s column is given a higher value if the corresponding relation $w \in t$ holds. A resulting LDA model $\Phi$ will then be close to the original topics $T$ from Topic Grouper but allows for other words to be included to a certain degree in each distribution $\Phi_t$. Compiling related experimental results is work in progress. Alternatively, topics produced by Topic Grouper may provide useful initialization values for an EM procedure under pLSI. Another line of research may be the early \emph{detection of polysemic words} $w$ in order to address them in a special manner during the clustering process. I.e., if $\Delta h(\{w\}, s)$ and $\Delta h(\{w\}, t)$ according to Equation \ref{eq:deltah} are similar and high, then the topics $s$ and $t$ are both good join candidates for $\{w\}$. This may trigger a special treatment of $w$. We have already mentioned the need to substantiate model quality via \emph{extrinsic evaluation} methods as described in Section \ref{perplexity}. Our tool from Section \ref{viz} allows for just a basic exploration of learned tree models. A more sophisticated system may include complementary visualization methods and the aforementioned smoothing procedures for flat topic views. Also, navigational links from topics to their underlying documents are to be included. \emph{Tool support for the exploration of document collections} is an ongoing area of research and many solutions have been suggested---several of them exploiting LDA topic models (e.g., \cite{conf/icwsm/ChaneyB12}, \cite{Gretarsson:2012:TVA:2089094.2089099}, \cite{lee-etal:2012:compgraphfor} or \cite{sievert2014ldavis}).
Corresponding insights and concepts should be considered and potentially adapted when leveraging results from Topic Grouper. In this context, a particular question is how to take advantage of related tree models as opposed to the established use of flat topics. Note that topic trees are demanded by certain clientele: E.g., \cite{brehmer-etal:2014:tvcg} stress their importance when reporting on the Overview system -- a successful document analysis tool developed with a focus on investigative journalism. \cite{Wei-Croft:2006:SIGIR} have employed LDA to \emph{improve document ranking models} for ad-hoc document retrieval. Their approach may be adapted to use models from Topic Grouper instead. The efficiency of determining $t(w)$ and $p(t|d_{test})$ under Topic Grouper may generally be useful to improve retrieval results: E.g., \emph{query expansion} may be performed on the basis of small topics $t$ containing all or most of the entered search terms $w$. In this regard, best matching topics may be chosen from the entire topic tree -- not just a flat view of topics. Finally, topic modeling has been applied to the field of \emph{recommender systems} (e.g., see \cite{Wang:2011:CTM:2020408.2020480, hu-hall-attenberg:2014:kdd}). Consequently, it might be interesting to assess the potential of Topic Grouper for this purpose as it produces even very small topics and may therefore play a similar role for recommendation as the Apriori method (\cite{Sandvig:2007:RCR:1297231.1297249}).
{ "timestamp": "2019-04-16T02:07:32", "yymm": "1904", "arxiv_id": "1904.06483", "language": "en", "url": "https://arxiv.org/abs/1904.06483" }
\section{Introduction} While the disk accretion paradigm in young stars has tended to focus on the simpler case of single stars, understanding how the process works in binary systems is essential since most stars are multiple \citep[e.g.][and references therein]{2013ARA&A..51..269D}. Studies of young accreting binaries also provide upper bounds on the dynamical effects of companions on disk structure and evolution, and elucidate the potential for planet formation in binary systems. Close (spectroscopic) binary systems are particularly interesting since they are likely surrounded by circumbinary accretion disks during the class I and II phases \citep{2000MNRAS.314...33B}, disks which are largely analogous to those around single stars except within the central few AU. Moreover, the relatively short periods of spectroscopic binaries, typically ranging from days to weeks, enable multi-epoch monitoring campaigns capable of characterizing variability in the accretion and inner disk emission that may trace dynamically-induced effects on the material closest to the stars. The spectroscopic binary T Tauri star DQ Tau has been a target of particular interest. \citet{1997AJ....113.1841M} first characterized its orbital parameters ($P=15.8$ days, $e=0.56$, $a \sim$ 0.14 AU), and also discovered a correlation between orbital phase and optical photometric variability, in which the light curves exhibited sharp increases in brightness at or just before periastron passages. A similar correlation of spectroscopic signatures of accretion activity such as H$\alpha$ emission and continuum veiling was also found by \citet{1997AJ....114..781B}. These results suggested that the accretion flow onto the stars was highly modulated by their orbital motion, repeatedly peaking in intensity when they drew close to each other. Such behavior was predicted by hydrodynamical simulations of circumbinary disk accretion, which showed that torques generated by the binary motion create a low-density cavity in the disk out to a distance of $\sim 2.5a$; accretion then proceeds onto the stars via dynamical streams of denser material that are repeatedly torn off the inner edge of the disk near apastron orbital phase and reach a maximum flow of gas near periastron phase, before being disrupted as the stars move further apart \citep{1996ApJ...467L..77A}. This process has come to be known as ``pulsed'' accretion. Subsequent simulations of circumbinary disks around both young stars and compact objects, with increasingly sophisticated computational techniques and exploration of parameter space, have shown the same general features, and elucidated the effects of different binary orbital architectures on the strength and periodicity of the accretion \citep{2002A&A...387..550G, 2011MNRAS.413.2679D, 2012ApJ...749..118S, 2015MNRAS.448.3545D, 2016ApJ...827...43M}. Subsequent observations of DQ Tau have further clarified its behavior and revealed more complexity. \citet{2001ApJ...551..454C} detected CO fundamental emission, constraining the location of the emitting gas to be within the putative circumbinary disk hole. \citet{2009ApJ...696L.111B} resolved K-band emission using interferometry, also locating the emission region inside of the expected disk inner edge.
Observations of flares at X-ray and millimeter wavelengths \citep{2008A&A...492L..21S, 2010A&A...521A..32S, 2011ApJ...730....6G} found evidence of enhanced activity near periastron, suggestive of magnetic reconnection events induced as the magnetospheres of the two stars interact/collide. \citet[hereafter, Bary14]{2014ApJ...792...64B} recovered the orbital phase dependence of accretion using Pa$\beta$ emission from multi-epoch near infrared spectroscopy; however, they also discovered a surprising increase in accretion luminosity near {\it apastron} phase at one epoch. More recent optical photometric investigations have also uncovered complex light curve behavior, with occasional brightenings not associated with periastron \citep{2017ApJ...835....8T, 2018ApJ...862...44K}, although a correlation with orbital phase remains the dominant feature. The signature of pulsed accretion remains extremely rare. Only one other young close binary system has been found that exhibits unambiguous brightenings associated with periastron passages \citep[TWA3;][]{2017ApJ...842L..12T}. \citet{2007AJ....134..241J} found a periodicity in the optical light curve of UZ Tau E which matched the orbital period; however, the flux variations were much slower with low amplitude, and direct accretion probes such as H$\alpha$ showed inconclusive correlations. Three protostellar objects have also been found to exhibit periodic infrared brightenings that look very much like pulsed accretion \citep{2013Natur.493..378M, 2015ApJ...813..107H, 2016ApJ...833..104F}, although no evidence of binarity has yet been published. Other aspects of the expected circumbinary disk structure have been seen in wider binaries where features can be more readily spatially resolved, such as circumbinary rings and streams \citep[e.g., GG Tau;][]{2016A&ARv..24....5D}. In close binaries such as DQ Tau, these regions of the system can be probed with multi-epoch infrared observations, particularly at wavelengths $\ge 2 \mu$m where warm dust emission becomes significant. Infrared photometry in particular has been lacking, however; we thus initiated a multi-year campaign to obtain near-infrared (NIR) photometric monitoring, with simultaneous optical observations, spanning multiple orbital cycles. In this paper, we present the results of the first two seasons of photometry, along with contemporaneous 0.8-5 $\mu$m spectroscopy at a more limited set of epochs. In Section 2, we describe the observations and data reduction. Section 3 details the photometric analysis of light curves, color-color comparisons, and periodicities and timing, as well as the spectroscopic analysis of spectral typing, veiling measurements, and derivation of excess spectra. Finally, we discuss our results in the context of the pulsed accretion model in Section 4. For the purposes of our various analyses, we adopted the most recent and robust determinations of the DQ Tau system parameters from the literature, as listed in Table~\ref{params}. This includes updating the distance from the canonical Taurus region value of 140 pc to the specific value of 196 pc recently measured by Gaia \citep{2018AJ....156...58B}. Despite the possibility of systematic errors due to the binarity of the system, we believe this new larger value is likely correct; DQ Tau is part of a small subgroup of five stars that lie within the L1558 cloud southeast of the main concentration of stars in the Taurus star forming region; their median Gaia distance is 196 pc \citep{2018AJ....156..271L}.
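Throughout the following analyses, observation epochs are folded on the orbital ephemeris listed in Table~\ref{params}. As a minimal illustration (a hypothetical Python helper of our own; it neglects the small difference between JD and HJD, which is negligible compared to the pulse durations):
\begin{verbatim}
import numpy as np

P = 15.80158          # orbital period in days (adopted value)
T_PERI = 2447433.507  # HJD of a periastron passage (adopted value)

def orbital_phase(jd):
    """Orbital phase in [0, 1), with phase 0 at periastron."""
    return np.mod((np.asarray(jd, dtype=float) - T_PERI) / P, 1.0)
\end{verbatim}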
\begin{deluxetable}{lcc} \tablewidth{0pt} \tablecaption{Literature properties of DQ Tau} \tablehead{ \colhead{property} & \colhead{adopted value} & \colhead{reference}} \startdata $P$ (days) & 15.80158 & \citet{2016ApJ...818..156C}\\ $e$ & 0.568 & \citet{2016ApJ...818..156C}\\ $T_{peri}$ (HJD-2,400,000) & 47433.507 & \citet{2016ApJ...818..156C}\\ d (pc) & 196 & \citet{2018AJ....156...58B}\tablenotemark{a}\\ $T_{eff}$ (K) & 3700 & \citet{2016ApJ...818..156C}\\ $i$ (deg) & 158 & \citet{2016ApJ...818..156C}\\ $M_1 + M_2$ ($M_{\odot}$) & 1.21 & \citet{2016ApJ...818..156C}\tablenotemark{b}\\ $L_1 + L_2$ ($L_{\odot}$) & 0.64 & \citet{2016ApJ...818..156C}\tablenotemark{c}\\ $a$ (AU) & 0.13 & \citet{2016ApJ...818..156C}\\ $R_{co}$ (AU) & 0.034 & \citet{2018ApJ...862...44K}\tablenotemark{d}\\ \enddata \tablenotetext{a}{Based on the Gaia DR2 catalog.} \tablenotetext{b}{Estimated by Czekala et al. assuming a distance of 140 pc.} \tablenotetext{c}{Scaled up by a factor of two to account for the Gaia distance of 196 pc.} \tablenotetext{d}{Corotation radius calculated given the rotation period measured by K\'osp\'al et al. (3.017 days), and the stellar mass (not adjusted for the difference in distance, and assuming both stars are identical).} \label{params} \end{deluxetable} \section{Observations} \subsection{SMARTS Photometry} We observed DQ Tau with the ANDICAM instrument on the CTIO 1.3m telescope, operated by the SMARTS Consortium. ANDICAM is a dual-channel imager that enables simultaneous observations at two band passes in the optical and near-infrared. The optical channel has a field of view of $\sim 6' \times 6'$ and a detector pixel scale of $\sim 0.3$ arcsec, while the NIR channel has a field of view of $\sim 2.4' \times 2.4'$ and a detector pixel scale of $\sim 0.2$ arcsec. We used $BVI$ filters (standard KPNO Johnson-Cousins) with the optical channel and $JHK$ filters (CIT/CTIO) with the infrared channel. Table~\ref{obs} summarizes the exposure times and observation date ranges. The NIR channel has an internal tip-tilt mirror to enable small-scale dithering; six exposures were taken at each filter, with each exposure separated by a small dither offset of $\sim 20$ arcseconds. The observations were obtained over two seasons from fall 2012 to winter 2014; each season had almost continuous nightly coverage (excluding bad weather or scheduling pre-emptions) over two- to four-month periods, during which time the star was observable below 2 airmasses. The total sequence of exposures taken with 3 pairs of filters each night took about 10 minutes to execute. \begin{deluxetable}{lccccccc} \tabletypesize{\small} \tablewidth{0pt} \tablecaption{SMARTS DQ Tau observation summary} \tablehead{ \colhead{date range} & \multicolumn{6}{c}{exposures\tablenotemark{a}} & \colhead{N$_{obs}$}\\ \colhead{} & \colhead{B} & \colhead{V} & \colhead{I} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{}} \startdata Nov. 15 2012 - Jan. 30 2013 & 3x30 & 3x15 & 3x10 & 6x12 & 6x6 & 6x7 & 62\\ Sep. 9 2013 - Feb. 3 2014 & 3x30 & 3x15 & 3x15 & 6x15 & 6x9 & 6x9 & 103 \enddata \tablenotetext{a}{Number of exposures and exposure time in seconds, per filter and observation.} \label{obs} \end{deluxetable} The optical data are automatically processed by the SMARTS pipeline, including bias and zero subtraction and flat field correction. We measured stellar photometry using the resulting archived products, as described below. The NIR images are not automatically processed, other than having a $2 \times 2$ pixel binning applied.
For each set of dithered exposures at each filter, we created a median sky image and subtracted it from each exposure. We then divided the sky-subtracted images by flat fields constructed from dome flat exposures at each filter. We measured stellar photometry using the archived pipeline products in the optical and the sky-subtracted, flat-fielded images in the NIR. This was done with aperture photometry, using an aperture radius of 20/15 pixels and sky annulus of 25-32/30-35 pixels in the optical/NIR. Suitably bright comparison stars within the field provided relative photometry, calibrating out variable nightly weather conditions; three were used in the optical images and one in the NIR. Most of these comparison stars are fainter than DQ Tau at all bands, and thus are the limiting factor in the final photometric uncertainties in most cases. In poor weather conditions, the NIR sky background was variable on timescales of minutes or less, which was sometimes the largest source of photometric error at the $JHK$ bands. Relative magnitudes were computed for each exposure in each band, and the final results averaged over all exposures (and all comparison stars in the optical) for a given band on each night, including one iteration of outlier rejection, with the standard deviation of these values taken as the uncertainty. By cross-checking the optical comparison stars, we demonstrated excellent repeatability (and ruled out any significant intrinsic variability) with an overall precision of $\sim 0.04$, 0.02, and 0.015 magnitudes at $B$, $V$, and $I$. The relative precision of the NIR photometry, as estimated by comparing the primary comparison star with a fainter third star in the field, is $\sim 0.01$, 0.02, and 0.04 magnitudes at $J$, $H$, and $K$ (the fainter star is bluer, so these relative measurements are more uncertain at longer wavelengths). To convert the optical photometry to the Johnson-Cousins system, we calibrated the three comparison stars using contemporaneous standard star observations taken during the 2012 season on ostensibly photometric nights. A total of 35 observations of the Landolt standard TPhe D were taken on the same nights as our DQ Tau observations. The zero points and color terms for these are given on the SMARTS consortium website, and we used them to convert the instrumental magnitudes to the standard system. Table~\ref{compstars} gives the resultant magnitudes. The extinction coefficients are the largest source of uncertainty in this conversion because of the limited number of measurements as a function of airmass; we used the ``default'' values provided on the SMARTS website. We conservatively estimate overall absolute uncertainties of about 0.2 magnitudes in each optical band. $B$ and $V$ photometry for all three stars are provided in the UCAC4 catalog \citep{2013AJ....145...44Z}, and are within 0.1 magnitudes of our values. \citet{2010A&A...521A..32S} also derived photometry for all three stars at $V$ and $I$ that agrees to within 0.2 magnitudes. In the NIR, we used photometry of the primary comparison star from 2MASS to convert the relative magnitudes of DQ Tau directly to the CIT system \citep{2001AJ....121.2851C}; the absolute accuracy is then limited by the uncertainties of the 2MASS measurements and the transformations between the 2MASS and CIT systems (combined, about 0.05 magnitudes in each band). Table~\ref{photometry} shows a truncated set of the final calibrated photometry (the full version will be made available online).
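The differential photometry step can be summarized in a short sketch (our own illustration; the 3-sigma clipping threshold is an assumption standing in for the single iteration of outlier rejection described above):
\begin{verbatim}
import numpy as np

def nightly_relative_mag(f_target, f_comp, clip=3.0):
    """Relative magnitude of the target w.r.t. a comparison star,
    averaged over one night's exposures in one band, with a single
    pass of outlier rejection."""
    m = -2.5 * np.log10(np.asarray(f_target) / np.asarray(f_comp))
    keep = np.abs(m - np.median(m)) <= clip * np.std(m)
    return m[keep].mean(), m[keep].std()
\end{verbatim}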
\begin{deluxetable}{lccrrrrrr} \tabletypesize{\small} \tablewidth{0pt} \tablecaption{Comparison star photometry} \tablehead{ \colhead{ID} & \colhead{RA\tablenotemark{a}} & \colhead{DEC\tablenotemark{a}} & \colhead{B} & \colhead{V} & \colhead{I} & \colhead{J\tablenotemark{a}} & \colhead{H\tablenotemark{a}} & \colhead{K\tablenotemark{a}}} \startdata 1\tablenotemark{b} & 04:46:40.79 & +16:57:50.4 & 15.19 & 13.56 & 11.66 & 10.27 & 9.55 & 9.32\\ 2\tablenotemark{b} & 04:46:39.64 & +17:00:04.3 & 15.84 & 14.38 & 12.46 & 10.98 & 10.43 & 10.16\\ 3 & 04:46:46.15 & +17:00:29.1 & 16.02 & 14.79 & 13.22 & 12.02 & 11.51 & 11.33 \enddata \tablenotetext{a}{Coordinates and magnitudes from 2MASS.} \tablenotetext{b}{Not in ANDICAM NIR field of view.} \label{compstars} \end{deluxetable} \begin{deluxetable}{lcccccccccccc} \tabletypesize{\small} \tablewidth{0pt} \tablecaption{SMARTS/ANDICAM photometry for DQ Tau} \tablehead{ \colhead{JD -2450000} & \colhead{B} & \colhead{$\sigma_B$} & \colhead{V} & \colhead{$\sigma_V$} & \colhead{I} & \colhead{$\sigma_I$} & \colhead{J} & \colhead{$\sigma_J$} & \colhead{H} & \colhead{$\sigma_H$} & \colhead{K} & \colhead{$\sigma_K$}} \startdata 6246.72 & 15.0582 & 0.0120 & 13.4431 & 0.0160 & 11.1853 & 0.0256 & 9.4064 & 0.0074 & 8.4451 & 0.0031 & 7.9183 & 0.0085 \\ 6247.65 & 15.0151 & 0.0242 & 13.4501 & 0.0104 & 11.1837 & 0.0094 & 9.4013 & 0.0083 & 8.4310 & 0.0122 & 7.8902 & 0.0004 \\ 6248.63 & 14.6287 & 0.0201 & 13.1966 & 0.0120 & 11.0268 & 0.0112 & 9.3191 & 0.0017 & 8.3424 & 0.0267 & 7.7732 & 0.0135 \\ 6249.66 & 13.6157 & 0.0153 & 12.4896 & 0.0093 & 10.5984 & 0.0152 & 9.0337 & 0.0179 & 8.0772 & 0.0062 & 7.4715 & 0.0087 \\ 6250.63 & 14.4273 & 0.0237 & 13.1242 & 0.0165 & 10.9752 & 0.0241 & 9.1916 & 0.0145 & 8.2149 & 0.0033 & 7.6565 & 0.0192 \\ 6251.65 & 14.6657 & 0.0189 & 13.2381 & 0.0104 & 11.0688 & 0.0107 & 9.3150 & 0.0067 & 8.3561 & 0.0066 & 7.8098 & 0.0078 \\ 6252.62 & 14.8033 & 0.0331 & 13.2841 & 0.0188 & 11.0839 & 0.0126 & 9.3405 & 0.0126 & 8.3890 & 0.0088 & 7.8629 & 0.0104 \\ 6253.67 & 15.1059 & 0.0312 & 13.4964 & 0.0148 & 11.2090 & 0.0099 & 9.4460 & 0.0035 & 8.4848 & 0.0032 & 7.9601 & 0.0048 \\ 6254.64 & 15.0925 & 0.0174 & 13.4748 & 0.0049 & 11.1824 & 0.0127 & 9.4246 & 0.0050 & 8.4928 & 0.0047 & 7.9534 & 0.0133 \\ 6255.68 & 15.0436 & 0.0370 & 13.4693 & 0.0187 & 11.1732 & 0.0074 & 9.4282 & 0.0092 & 8.4614 & 0.0035 & 7.9331 & 0.0101 \enddata \tablecomments{The quoted uncertainties include only random measurement errors. Values of -999 indicate missing data.} \label{photometry} \end{deluxetable} \subsection{SpeX Spectroscopy} We observed DQ Tau with the SpeX spectrograph \citep{2003PASP..115..362R} at IRTF on December 13, 22, and 30, 2012, and January 5 and 8, 2013. All observations used both SXD and LXD modes, for a combined wavelength coverage of 0.8 to 5 $\mu$m, and a slit width of 0.8$''$, for a spectral resolution of $\sim 1000$. The position angle was adjusted to keep the slit at the parallactic angle during each observation in order to minimize slit losses from atmospheric refraction. Total exposure times were typically 6 to 8 minutes for SXD mode and 30 to 40 minutes for LXD mode, all split into multiple nods in the typical ABBA pattern for background subtraction. The data were reduced and spectra extracted using the Spextool package \citep{2004PASP..116..362C}, which includes routines for sky subtraction, flat fielding, tracing and extraction of each spectral order, telluric correction, and order matching and combination.
The telluric correction step was done using spectra of A0 stars observed near in time and airmass to each DQ Tau observation, combined with a model spectrum of Vega in order to correct for photospheric absorption lines \citep{2003PASP..115..389V}. The final spectra have a very accurate spectral shape, although the absolute flux level (estimated by extrapolating from the optical brightness of the telluric standard) can have larger errors depending on the weather conditions (Fig.~\ref{compspec}). \begin{figure}[H] \includegraphics[scale=0.8]{dqtau_nircomp.pdf} \caption{Comparison of the NIR portion of the SpeX spectra (solid lines) for all epochs with the ANDICAM photometric measurements nearest in time (error bars; the horizontal portion represents the ANDICAM bandpasses). For clarity, each set of data after the 12/13/2012 set has been shifted upward by the following amounts in order of increasing time: 3, 7, 11, 16. Solid triangles indicate the spectral flux convolved by the appropriate ANDICAM bandpass at the J, H, and K bands. The absolute flux levels of the spectra differ from the ANDICAM photometry by $\sim15$\% or less, and the variations between bandpasses at each epoch are within $\sim3$\%. \label{compspec}} \end{figure} \section{Results} \subsection{Photometric Behavior} The time series photometry for the 2012-2013 and 2013-2014 seasons is shown in Figures~\ref{smphot12} and~\ref{smphot13}, respectively. Repeated flux increases (hereafter referred to as ``pulses'') above a flat or slowly varying baseline level are readily apparent at all bands in both the optical and NIR. Pulse events occur within most of the binary orbits covered by our data, and usually (but not always) peak at or just before the time of periastron passage. The pulses are typically sharply peaked and have durations ranging from $\sim$2 to 6 days. Of the 14 binary orbits fully covered over both seasons, only two appear to lack obvious pulse events in the optical (the ones with periastron passages near JD 2456315 and 2456600); however, because of gaps in coverage due to bad weather, we cannot rule out the presence of very short pulses with durations of 1-2 days in these cases. In the NIR, only one orbit shows no related pulse (JD 2456315), although the timing of the increase at the very end of the 2012-2013 monitoring leads to ambiguity as to whether it was associated with the preceding orbit or the next (mostly uncovered) one. The timing of the pulse peaks in many cases appears to be wavelength-dependent, with the peaks at $BVIJ$ typically occurring at the same time, while some peaks at $H$ and $K$ occur up to 1 to 2 days beforehand. The amplitudes of the optical peaks are strongly variable, with increases above the baseline at $B$ mostly ranging from $\sim 0.5$ to 2 magnitudes, and up to 3.5 magnitudes in one extraordinary case (near JD 2456630). In the NIR, the amplitudes are smaller and somewhat more regular, with most pulses at about 0.5 magnitudes at $K$. \begin{figure}[H] \plotone{dqtau_2012_all.pdf} \caption{SMARTS/ANDICAM BVIJHK light curves of DQ Tau, in relative magnitudes, for the 2012-2013 season. The epochs of periastron passage are marked with dotted black lines. For clarity, each band has been median-subtracted and offset by an arbitrary amount. \label{smphot12}} \end{figure} \begin{figure}[H] \plotone{dqtau_2013_all.pdf} \caption{Same as in Figure~\ref{smphot12}, for the 2013-2014 season. \label{smphot13}} \end{figure} The pulse heights and, to a lesser extent, durations are wavelength-dependent.
Figures~\ref{smcc12} and~\ref{smcc13} show optical and NIR color-color plots for both observing seasons. In the optical, the flux level is strongly correlated with $B-V$ color, with brighter epochs (i.e., the pulses) being bluer. Plotted as a function of time, the pulse correlation can be seen more clearly (Figs.~\ref{smct12},~\ref{smct13}). The $B-V$ baseline between the pulses is relatively flat, with the average color being consistent with a reddened M0 photosphere with A$_V \gtrsim 1$; there is likely some residual continuum excess at least at the B and V bands, given that previous measures of optical veiling rarely if ever decreased to zero \citep[e.g.,][]{1997AJ....114..781B}, and the fact that small color variations do occur. In the NIR, the brighter epochs correspond to {\it redder} colors. In general, the NIR observations form a locus of points that are roughly parallel to the CTTS locus \citep[as defined by][]{1997AJ....114..288M}. Dereddening these points down to the locus (assuming that the separation is entirely due to extinction, which may not be the case) yields a typical extinction value of A$_V \sim 1.5$. There are a few interesting outliers in the 2013-2014 season, which we describe in more detail below. The $H-K$ color time series (lower panels of Figs.~\ref{smct12} and~\ref{smct13}) exhibit peaks corresponding to each pulse event; most of these peaks are broader in time, with an earlier start time in terms of binary phase (typically at or near apastron phase), compared to the peaks seen in the optical. They also typically reach a maximum earlier than the optical color peaks, consistent with the behavior in the corresponding photometric bands as described above. In addition to the pulse events corresponding to each orbital period, the NIR photometry also exhibits a longer-term trend at the H and K bands with characteristic timescales of a few months. The trend is most apparent in the H-K color time series, with an amplitude of $\sim 0.15$ magnitudes. \begin{figure}[H] \includegraphics[scale=0.9]{dqtau_2012_colors.pdf} \caption{SMARTS/ANDICAM optical and NIR color-color diagrams for the 2012-2013 season. The symbol color scheme scales with B (upper panel) and K (lower panel) magnitudes, where indigo is the brightest observation and dark red is the faintest. The dwarf star color sequences \citep[using colors from][]{1995ApJS..101..117K} are shown with the black solid lines; the arrows depict reddening vectors for an M0 photosphere with $A_V=1$. The CTTS locus \citep{1997AJ....114..288M} is also shown in the lower panel, with a reddening vector at its blue end. \label{smcc12}} \end{figure} \begin{figure}[H] \includegraphics[scale=0.9]{dqtau_2013_colors.pdf} \caption{Same as in Figure~\ref{smcc12}, for the 2013-2014 season. \label{smcc13}} \end{figure} \begin{figure}[H] \plotone{dqtau_2012_colors_time.pdf} \caption{SMARTS/ANDICAM {\it B-V} and {\it H-K} colors versus time for the 2012-2013 season. \label{smct12}} \end{figure} \begin{figure}[H] \plotone{dqtau_2013_colors_time.pdf} \caption{Same as in Figure~\ref{smct12}, for the 2013-2014 season. \label{smct13}} \end{figure} In order to quantify the apparent lags between bands, we computed discrete correlation functions \citep[DCFs;][]{1988ApJ...333..646E} for the combined 2012-2013 and 2013-2014 photometry. This method is designed for unevenly-sampled data, as is the case here (we also looked at cross-correlations of the light curves interpolated onto a regular time spacing and obtained similar results).
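For reference, a minimal Python sketch of the DCF computation for two unevenly sampled light curves (our own simplified version; for brevity it omits the measurement-error terms of the full \citet{1988ApJ...333..646E} estimator):
\begin{verbatim}
import numpy as np

def dcf(t1, a, t2, b, lag_centers, bin_width):
    """Bin the unbinned correlations
    UDCF_ij = (a_i - <a>)(b_j - <b>) / (sigma_a * sigma_b)
    by pairwise lag dt_ij = t2_j - t1_i."""
    ua = (a - a.mean()) / a.std()
    ub = (b - b.mean()) / b.std()
    dt = t2[None, :] - t1[:, None]   # all pairwise lags
    udcf = ua[:, None] * ub[None, :]
    return np.array([udcf[np.abs(dt - tau) < bin_width / 2].mean()
                     for tau in lag_centers])
\end{verbatim}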
Figure~\ref{dcfs} shows the DCFs of the V and K bands and the H-K color relative to the B band. The V band is well-correlated with B and shows no measurable lag. The K band is also correlated, though at a somewhat lower level of significance. Its DCF is clearly offset from zero lag, with maxima at zero and 2-day lags and a ``centroid'' value at about a 1-day lag, and is also broader than for the V band. This may be a further indication of the longer apparent duration of the K band flux peaks on average compared to the optical pulse events, or alternatively may be explained by two separate events in the light curves, one with zero lag associated with the optical events and one with a lag of $\sim 2$ days. The H-K color DCF exhibits a weak peak at a lag of 3 days; however, the significance is not as robust given the smaller amplitude of the H-K peaks. \begin{figure}[H] \includegraphics[scale=0.6]{dqtau_dcf.pdf} \caption{Discrete correlation functions for the indicated bands compared to the B band. The photometry from both the 2012-2013 and 2013-2014 seasons was included. \label{dcfs}} \end{figure} Periodograms of the combined data from both seasons are shown in Figure~\ref{pds}. Results for the B band photometry show a weak peak at $15.77 \pm 0.41$ days (error estimated from the FWHM of the periodogram peak), with a false alarm probability (FAP) of 0.65, indicating only a weak statistical correlation of the blue pulses with the binary orbital period. The K band periodogram also shows a peak at $15.80 \pm 0.42$ days, with FAP $\sim 2 \times 10^{-8}$, a much stronger statistical correlation that reflects the more steady nature of the NIR pulses; there is also a second weaker but possibly significant peak at 74.1 days, which corresponds to the long-term trend mentioned above. The H-K color time series periodogram shows a strong peak at a similar period of $75.4 \pm 10.9$ days, with FAP $\sim 7 \times 10^{-8}$, and a peak at 15.77 days is still present but with lower significance. Given that our data cover only about three full periods of the 75-day feature, confirmation of a persistent periodic feature in the NIR light curves requires further long-term monitoring. The 15.8-day periodicity exactly matches the binary orbital period as measured from radial velocity observations. There is also a higher-frequency, low-amplitude variation seen during several quiescent cycles in both seasons, with amplitude decreasing to longer wavelengths. Periodograms for those parts of the V-band light curves (in the intervals JD 2456300-6325 and 2456585-6615) show a peak at $3.0 \pm 0.3$ days (with significance values of 0.05 and 0.09, respectively). This is similar to the estimate of the stellar rotation period of $\sim 3$ days from \citet{1997AJ....114..781B} based on measurements of $v \sin i$, and is in agreement with the period recently found from Kepler K2 observations \citep[3.017 days;][]{2018ApJ...862...44K}. This feature is likely the signature of a rotating hot or cold spot on the surface of one or both of the stars, a signature that is typically obscured by the larger-amplitude variations likely driven by accretion. \begin{figure}[H] \includegraphics[scale=0.6]{dqtau_2012-13_periodograms.pdf} \caption{Lomb normalized periodograms for the B, K, and {\it H-K} color time series for the combined 2012-2013 and 2013-2014 seasons. Note that each of the major peaks is accompanied by several sidelobes, which are artifacts caused by the large gap between the two seasons.
\label{pds}} \end{figure} Several orbital cycles monitored in the 2013-2014 season presented particularly noteworthy behavior, some of which contradicts the general photometric trends. Figure~\ref{2013zoom} shows the multiband photometry and colors for this interval, which spans about 2.5 cycles. The most obvious feature is the extremely large pulse near JD 2456630, as previously mentioned. To our knowledge, this is by far the largest-amplitude optical brightening ever observed in DQ Tau, with a maximum increase of about 3.4 magnitudes at $B$ and just under 1 magnitude at $K$. The pulse peak is extremely blue in the optical, consistent with emission from the accretion shock, which is so strong that even the $J-H$ color is significantly bluer than at any other epoch (dropping below the CTTS locus). By contrast, the peak $H-K$ colors are similar to those of other pulse peaks. Immediately after this strong pulse, the IR emission decreases significantly, with the $H-K$ color exhibiting a particularly dramatic drop to one of the lowest levels we see at any epoch, though it recovers quickly in just a few days. The subsequent two orbital cycles show very different behavior. No obvious optical pulse appears before the next periastron passage, although there is a peak in the NIR. After that, the light curves become much more complicated; there are {\it four} peaks in the optical separated by roughly four to five days, and three peaks in the $H-K$ color curve (not coincident with the optical peaks). \begin{figure} \plotone{dqtau_2013_zoom.pdf} \caption{A zoomed-in portion of the light curve from the 2013-2014 season, along with various associated colors. The symbols are color-coded according to time to facilitate matching between panels. Dashed lines indicate times of periastron passages. Arrows in the color-color plots indicate reddening vectors for $A_V = 1$. The solid line in the $H-K$ vs $J-H$ color-color plot is the CTTS locus. \label{2013zoom}} \end{figure} \subsection{Spectroscopy} The full set of spectra is shown in Figure~\ref{spec}. The data quality is uniformly high, with $S/N > 50$ over the entire wavelength range except for the edges of some of the telluric absorption bands and parts of the 4.6-5 $\mu$m order. In most of the 1-2.5 $\mu$m range, the $S/N > 300$. Telluric absorption residuals are seen in some epochs at the edges of the telluric windows and at 2.8-3.4 and 4.6-5 $\mu$m, typically the result of an imperfect match of the telluric template air mass. The spectra show absorption features typical of young early M-type stars, as well as a wide range of emission lines whose strengths vary considerably with epoch. Our primary objective with these data is to derive spectra of the dust excess emission in order to characterize its strength and shape. This requires matching with a proper photospheric template and correcting for extinction and veiling, a process which we describe in the next two subsections. \begin{figure}[H] \includegraphics[scale=0.8]{dqtau_spec.pdf} \caption{IRTF/SpeX spectra obtained on the indicated nights. Each spectrum has been scaled by the average relative offset from the contemporaneous SMARTS photometry. \label{spec}} \end{figure} \subsubsection{Spectral type and template matching} The spectral type of DQ Tau as estimated from optical spectroscopy ranges from about K5 \citep{1997AJ....114..781B} to M0.6 \citep{2014ApJ...786...97H}.
However, the SpeX data exhibit many features, in particular broad molecular absorption bands such as TiO and H$_2$O, that are more consistent with a later type. This was also shown by Bary14, who suggested that the discrepancy could be explained by including the effects of large cool spots. One difficulty with deriving accurate spectral types in the infrared is that many photospheric features are sensitive to surface gravity; the usual practice of adopting main sequence dwarfs to calibrate absorption line strengths can then have significant errors depending on which features are being compared \citep[e.g.,][hereafter, McClure13]{2013ApJ...769...73M}. To try to mitigate surface gravity effects, we observed several weak line T Tauri stars (WTTSs) with a range of spectral types to use as photospheric templates (Table~\ref{templatetab}). Figure~\ref{templates} compares the 0.8-1 $\mu$m range of one DQ Tau epoch (which had a low level of accretion activity, and hence veiling) to two of our WTTS templates, as well as several dwarf standards taken from the SpeX spectral library \citep{2005ApJ...623.1115C, 2009ApJS..185..289R}. The closest match by eye is the WTTS LkCa 21, which has an optical spectral type of M2.5 \citep{2014ApJ...786...97H}. For a more rigorous comparison, we measured absorption line equivalent widths and line ratios, restricted to lines at shorter wavelengths and/or closely spaced in wavelength in order to mitigate the effects of veiling. These specific indicators were found by McClure13 to offer the most accurate measures of spectral type. The resulting measurements for the WTTS templates and two epochs of DQ Tau (in which the veiling was lowest) are shown in Figure~\ref{abs_ews}. We also measured the same features in the IRTF spectral library dwarf spectra, degrading the resolution in order to match our observations taken with a wider slit. In most cases, DQ Tau and the WTTSs follow the trend with spectral type indicated by the dwarf measurements. The M3.6 WTTS LkCa 1 has systematically weaker absorption for all but one of the four lines plotted, although the two line ratios are consistent with its optical spectral type; this may be a surface gravity effect (note the luminosity derived by Herczeg \& Hillenbrand is fairly high for its spectral type, suggesting a larger radius and lower surface gravity). Overall, the measurements for DQ Tau suggest a spectral type in the range M0-M1, in excellent agreement with the most recent optically-derived type of M0.6 from \citet{2014ApJ...786...97H}. Again, of the three WTTS templates the closest match is LkCa 21. Given the reasonable similarity in absorption line equivalent widths as well as the excellent match to the broader TiO and H$_2$O absorption bands, we adopted LkCa 21 as the photospheric template in all further analyses of the dust excess emission in DQ Tau. \begin{deluxetable}{lccc} \tablewidth{0pt} \tablecaption{WTTS templates} \tablehead{ \colhead{Object} & \colhead{spectral type} & \colhead{$A_V$} & \colhead{log L ($L_{\odot}$)}} \startdata LkCa 14 & K5.0 & 0.0 & -0.15\\ LkCa 21 & M2.5 & 0.3 & -0.37\\ LkCa 1 & M3.6 & 0.45 & -0.29 \enddata \tablecomments{All measurements from \citet{2014ApJ...786...97H}.} \label{templatetab} \end{deluxetable} \begin{figure}[H] \includegraphics[scale=0.8]{dqtau_template_comp} \caption{Comparison of one of the DQ Tau SpeX spectra (with negligible veiling in the displayed wavelength range) to several template spectra with a range of spectral types. 
Fluxes have been arbitrarily shifted to place each spectrum in order of type. Spectra labeled only with a spectral type are for dwarf stars taken from the SpeX spectral library. \label{templates}} \end{figure} \begin{figure}[H] \plotone{dqtau_ews} \caption{Selected absorption line equivalent widths and line ratios as a function of spectral type for two epochs of DQ Tau (red and blue triangles), the WTTS templates (black squares), and dwarf spectra from the IRTF spectral library (error bars). The plotted spectral types for DQ Tau and the WTTSs were taken from \citet{2014ApJ...786...97H}, based on optical spectra. The formal measurement uncertainties for the TTSs are equal to or less than the symbol sizes. \label{abs_ews}} \end{figure} \subsubsection{Derivation of excess spectra} The photospheric absorption lines in our spectra of DQ Tau are subject to continuum veiling, particularly at wavelengths $\gtrsim 1.5 \mu$m. This excess emission originates largely from hot dust in the innermost regions of the system, as indicated by our NIR photometry. By characterizing the excess as a function of wavelength at multiple epochs, we can infer general properties such as temperature, location, and size of the emitting region, and how these vary with time. To measure the veiling, we compared depths of selected absorption lines in each spectrum of DQ Tau with those of the template, LkCa 21. There are several methods, originally devised for optical measurements \citep[e.g.][]{1989ApJS...70..899H}, that are equally applicable to the NIR (McClure13). We adopted the equivalent width method, whereby the veiling is measured for individual absorption lines using the ratio of equivalent widths between the object and template spectra: $EW_{temp}/EW_{obj} = 1 + r_\lambda$. As noted by McClure13, this technique has the advantage of being able to ignore lines that may be sensitive to differences in surface gravity or cool spot surface coverage between the object and stellar template. The measured line equivalent widths for all DQ Tau epochs and two WTTSs are given in Table~\ref{abs_ew}. We used only atomic lines at wavelengths shorter than $\sim 2.5 \mu$m; molecular features such as the CO overtone lines around 2.3 $\mu$m are more sensitive to gravity and spot effects, and any lines at longer wavelengths are filled in by the veiling.
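As a concrete illustration of the equivalent width method, the short sketch below converts measured equivalent widths into per-line veiling values; the arrays are placeholders standing in for the measurements in Table~\ref{abs_ew}.
\begin{verbatim}
import numpy as np

# Placeholder equivalent widths (Angstroms) for three lines; the actual
# measurements are listed in the equivalent width table (template = LkCa 21).
ew_template = np.array([1.50, 0.94, 1.60])   # LkCa 21
ew_object   = np.array([1.20, 0.95, 1.40])   # DQ Tau, one epoch

# Equivalent width method: EW_temp / EW_obj = 1 + r_lambda
r_lambda = ew_template / ew_object - 1.0     # per-line veiling values
\end{verbatim}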
\begin{deluxetable}{lccccccc} \tabletypesize{\small} \tablewidth{0pt} \tablecaption{Absorption line equivalent widths} \tablehead{ \colhead{line} & \colhead{} & \colhead{} & \colhead{DQ Tau} & \colhead{} & \colhead{} & \colhead{LkCa 21} & \colhead{LkCa 14} \\ \colhead{} & \colhead{12/13/12} & \colhead{12/22/12} & \colhead{12/30/12} & \colhead{1/5/13} & \colhead{1/8/13} & \colhead{} & \colhead{}} \startdata 0.81980 & 1.20 & 1.10 & 1.20 & 0.97 & 1.20 & 1.50 & 1.00\\ 0.83840 & 1.20 & 0.92 & 1.40 & 0.77 & 1.30 & 1.50 & 1.00\\ 0.88070 & 1.00 & 0.65 & 0.99 & 0.67 & 0.89 & 0.99 & 1.10\\ 0.90900 & 0.61 & 0.40 & 0.60 & 0.40 & 0.57 & 0.60 & 0.78\\ 0.97880 & 0.95 & 0.70 & 0.92 & 0.72 & 0.81 & 0.94 & 0.76\\ 1.03440 & 1.10 & 0.95 & 1.10 & 0.85 & 1.00 & 1.20 & 0.69\\ 1.14040 & 1.40 & 1.10 & 1.60 & 1.00 & 1.40 & 1.60 & 0.97\\ 1.16890 & 0.81 & 0.71 & 0.81 & 0.61 & 0.79 & 0.79 & 0.56\\ 1.18300 & 0.99 & 0.80 & 0.99 & 0.71 & 1.00 & 0.99 & 1.60\\ 1.2087 & 0.55 & 0.42 & 0.51 & 0.36 & 0.55 & 0.40 & 0.71\\ 1.25250 & 0.80 & 0.68 & 0.85 & 0.58 & 0.88 & 0.85 & 0.50\\ 1.31270 & 1.30 & 1.20 & 1.40 & 1.10 & 1.30 & 1.40 & 1.40\\ 1.31500 & 1.00 & 0.90 & 1.10 & 0.84 & 0.97 & 1.10 & 1.10\\ 1.48818 & 1.80 & 1.50 & 1.90 & 1.30 & 1.80 & 1.90 & 2.70\\ 1.50270 & 2.10 & 1.80 & 2.20 & 1.70 & 2.00 & 2.30 & 3.00\\ 1.57710 & 2.00 & 1.90 & 2.10 & 1.70 & 2.00 & 2.30 & 3.20\\ 1.58900 & 2.00 & 1.80 & 2.30 & 1.30 & 2.20 & 2.20 & 3.40\\ 1.61550 & 0.94 & 0.82 & 0.99 & 0.81 & 0.93 & 1.10 & 1.60\\ 1.62590 & 1.00 & 0.93 & 1.10 & 0.87 & 1.00 & 1.40 & 1.30\\ 1.6719 & 1.60 & 1.80 & 1.80 & 1.30 & 1.60 & 2.20 & 1.70\\ 1.67550 & 1.90 & 2.10 & 2.10 & 1.70 & 2.10 & 2.50 & 2.30\\ 1.71130 & 2.60 & 2.60 & 2.80 & 2.20 & 2.80 & 3.50 & 3.20\\ 2.11650 & 0.82 & 0.89 & 1.10 & 0.76 & 1.10 & 1.60 & 1.30\\ 2.26000 & 1.90 & 1.50 & 2.10 & 1.40 & 1.90 & 3.20 & 2.70 \enddata \tablecomments{First column is line wavelength in units of microns. Equivalent widths are in units of {\AA}.} \label{abs_ew} \end{deluxetable} In order to derive the excess emission spectrum at each epoch, the observed spectrum must be correctly matched with that of the template. We followed a procedure similar to \citet{1998ApJ...492..323G}, originally applied to UV/optical data and subsequently adapted to NIR data by \citet{2011ApJ...730...73F} and McClure13. This method allows a simultaneous derivation of the reddening and normalization factor given the measured veiling. In short, the ratio of the continuum fluxes of the object ($F_{obj}$) and the template ($F_{temp}$), modified by the object veiling, is related to the difference in extinction between the two by \begin{equation} 2.5 \log \left[ \frac{F_{\lambda,temp}}{F_{\lambda,obj}} (1+r_{\lambda}) \right] = \frac{A_{\lambda}}{A_V} \left( A_{V,obj} - A_{V,temp} \right) - 2.5 \log C , \end{equation} where $C$ is the normalization factor between object and template, and $A_{\lambda}/A_V$ is the extinction law. By measuring the flux and veiling at a number of lines across the spectrum, the left-hand side of equation 1 can be evaluated. A linear fit between those values and an assumed extinction law then yields the difference in extinction $A_{V,obj}-A_{V,temp}$ (from the slope of the fit), and the normalization constant $C$ (from the y-intercept). We adopted the extinction law from \citet{1990ARA&A..28...37M} for $\lambda < 3 \mu$m, and \citet{2007ApJ...663.1069F} for longer wavelengths.
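A minimal sketch of this fitting step is given below, assuming the form of equation 1 written above (slope equal to the extinction difference, intercept equal to $-2.5 \log C$); the extinction-law values would be interpolated from the adopted curve, and the function is illustrative rather than the code used for the analysis.
\begin{verbatim}
import numpy as np

def fit_extinction(f_temp, f_obj, r, alav, av_temp=0.3):
    # Fit equation (1):
    #   2.5*log10[(F_temp/F_obj)*(1+r)] =
    #       (A_lambda/A_V)*(A_V,obj - A_V,temp) - 2.5*log10(C)
    # f_temp, f_obj: continuum fluxes at each measured line
    # r: veiling at each line; alav: extinction law A_lambda/A_V there
    # av_temp: template extinction (0.3 mag for LkCa 21, from the
    #          WTTS template table)
    y = 2.5 * np.log10((f_temp / f_obj) * (1.0 + r))
    slope, intercept = np.polyfit(alav, y, 1)
    av_obj = slope + av_temp             # slope = A_V,obj - A_V,temp
    c_norm = 10.0 ** (-intercept / 2.5)  # normalization factor C
    return av_obj, c_norm
\end{verbatim}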
Once the extinction and normalization factor have been determined, the object spectrum can be dereddened, and the template can be scaled and subtracted to remove the photospheric component. The results of these calculations for each observation epoch are shown in Figures~\ref{veiling1}--\ref{veiling5}. Each upper panel plots the evaluation of equation 1 for each of the absorption lines indicated in Table~\ref{abs_ew}, along with a linear fit to the data points. Some of the lines selected for this analysis are consistently poor matches to the template. In particular, the lines at 0.8198 and 0.8384 $\mu$m, and most of the lines at wavelengths $> 1.6 \mu$m, are significantly stronger in the template spectrum than in any of the DQ Tau spectra, and thus give systematically larger veiling values. We believe this is due to a mismatch in the gravity and/or the fraction of the stellar surface covered by cool spots between object and template, and elected to ignore these lines when performing the linear fits. The middle panels show the measured veiling values for each line, along with a veiling spectrum calculated by dividing the excess spectrum by the template. There is good agreement between the individual veiling values and the calculated spectrum for all except the aforementioned discrepant lines, which indicates self-consistency between the derived extinction and normalization constant. Finally, the lower panels show the resulting dereddened and scaled object spectrum compared to the original observed spectrum and the template. \begin{figure}[H] \epsscale{0.8} \plotone{dqtau_dec13_veiling.pdf} \caption{Derived quantities and spectra from the 12/13/2012 observation of DQ Tau. (Top) Evaluated expression from equation 1, as a function of the extinction law, at each of the absorption lines for which the veiling was estimated (black triangles). A linear fit to the optimized subset of points is shown in red. (Middle) The derived veiling spectrum (black), with the measured values for individual absorption lines overplotted (red triangles). (Bottom) Comparison of the normalized dereddened DQ Tau spectrum (black), dereddened LkCa 21 template spectrum (blue), and original unscaled DQ Tau spectrum (red). \label{veiling1}} \end{figure} \begin{figure}[H] \plotone{dqtau_dec22_veiling.pdf} \caption{Same as in Fig.~\ref{veiling1}, for the 12/22/2012 observation. \label{veiling2}} \end{figure} \begin{figure}[H] \plotone{dqtau_dec30_veiling.pdf} \caption{Same as in Fig.~\ref{veiling1}, for the 12/30/2012 observation. \label{veiling3}} \end{figure} \begin{figure}[H] \plotone{dqtau_jan5_veiling.pdf} \caption{Same as in Fig.~\ref{veiling1}, for the 1/5/2013 observation. \label{veiling4}} \end{figure} \begin{figure}[H] \plotone{dqtau_jan8_veiling.pdf} \caption{Same as in Fig.~\ref{veiling1}, for the 1/8/2013 observation. \label{veiling5}} \epsscale{1} \end{figure} The resulting extinction estimates for each epoch are given in Table~\ref{av}. The values differ from each other, with a $>4 \sigma$ maximum deviation. Moreover, it appears that the extinction is systematically larger closer to periastron passages (see Discussion). The extinction can be independently estimated from the maximum observed $V-I$ color (in between pulse events), assuming no excess and an M0 photosphere; in that case, we derive $A_V \sim 0.9$, which is identical within the uncertainties to the minimum spectroscopically-derived value.
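For reference, the photometric estimate amounts to the following arithmetic; this is a hedged sketch in which the observed color, the intrinsic M0 color, and the $A_I/A_V$ ratio are standard assumed placeholder values, not measurements quoted in this work.
\begin{verbatim}
# Photometric extinction estimate from the V-I color excess.
V_I_OBSERVED  = 2.5    # reddest quiescent color (placeholder value)
V_I_INTRINSIC = 2.0    # intrinsic V-I of an M0 photosphere (assumed)
AI_OVER_AV    = 0.48   # A_I/A_V for a standard R_V = 3.1 law (assumed)

E_VI = V_I_OBSERVED - V_I_INTRINSIC
# E(V-I) = A_V - A_I = A_V * (1 - A_I/A_V)
A_V = E_VI / (1.0 - AI_OVER_AV)
print(round(A_V, 2))   # ~0.96 with these placeholder inputs
\end{verbatim}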
The uncertainties quoted in Table~\ref{av} reflect the formal errors on the fit, and do not take into account possible systematic errors from any mismatch between the object and template spectra. A detailed comparison using a larger grid of templates is needed to better constrain systematics, but we currently lack the sample to do this. Nevertheless, any such systematics should affect all epochs equally, and we believe that the apparent extinction variations are likely real. \begin{deluxetable}{lcc} \tabletypesize{\small} \tablewidth{0pt} \tablecaption{\label{av}Extinction from fitted spectra} \tablehead{ \colhead{date} & \colhead{phase} & \colhead{$A_V$}} \startdata 12/13/12 & 0.51 & 1.01 $\pm 0.13$\\ 12/22/12 & 1.08 & 1.60 $\pm 0.19$\\ 12/30/12 & 0.59 & 1.21 $\pm 0.11$\\ 1/5/13 & 0.97 & 1.80 $\pm 0.22$\\ 1/8/13 & 1.16 & 1.51 $\pm 0.19$ \enddata \end{deluxetable} \subsubsection{Characterizing the excess} The excess spectra resulting from subtraction of the template from each dereddened object spectrum are shown in Figure~\ref{excesses}. There is a clear change in both the shape and strength among the different epochs, apparently correlated with the binary orbital phase. The two observations taken closest to periastron passages show the largest excess with the bluest colors, while the reverse is true for those taken farthest in time from a periastron passage. There is a similar correlation with the emission line fluxes (see next subsection). The most dramatic changes occur in the 0.8-1.5 $\mu$m range, which exhibits nearly zero continuum excess near apastron phase but increases to a nearly flat excess spectrum near periastron. Other broad spectral features are also apparent in some or all epochs, such as the ``bumps'' centered roughly at 0.85, 1.05, 1.8, and 2.6 $\mu$m; these are coincident with molecular features such as TiO and H$_2$O, and are most likely indicative of slight spectral mismatches between object and template or, particularly in the latter case, artifacts introduced by imperfect telluric correction. \begin{figure}[H] \includegraphics[scale=0.7]{dqtau_excesses.pdf} \caption{The DQ Tau excess spectra derived from the five SpeX observations. The observation date and corresponding binary orbital phase (where a phase of 1 corresponds to periastron passage) are labeled in each case. For clarity, the spectra have been offset along the y-axis by the following amounts (from top to bottom): 4, 3, 2, 1, 0. \label{excesses}} \end{figure} In order to derive a rudimentary characterization of the excess shape, we fit a set of blackbody components to each observation. The optical-NIR continuum excess in most CTTSs has several components: ``hot'' emission most likely originating from the accretion shock on the stellar surface, ``warm'' emission produced by dust at or near the inner disk rim where temperatures reach the sublimation point, and ``cool'' emission produced by the region of the disk behind the inner rim. Detailed studies have indicated that the picture is somewhat more complicated, with intermediate-temperature components of uncertain origin \citep[e.g.][]{2011ApJ...730...73F}. For the purposes of this study, where we are taking a first look at time variability in the excess, we restrict ourselves to blackbody models of these three basic components, following McClure13. Each component is defined by a characteristic blackbody temperature and solid angle.
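A simplified sketch of such a grid search is given below; it is illustrative only, using the fixed temperatures and parameter ranges detailed in the following paragraph, with solid angles in units of the stellar solid angle and uniform weighting standing in for the proper measurement errors in the chi-squared.
\begin{verbatim}
import numpy as np
from itertools import product

H_P, C_L, K_B = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants

def planck(wav_um, temp):
    # Blackbody B_lambda (cgs) at wavelength wav_um in microns
    lam = wav_um * 1e-4
    return (2.0 * H_P * C_L**2 / lam**5 /
            np.expm1(H_P * C_L / (lam * K_B * temp)))

def fit_excess(wav_um, excess, omega_star):
    # Grid chi-squared fit of hot (7000 K) + warm + cool (600 K)
    # blackbodies; omega values are in units of the stellar solid angle.
    warm_t = np.arange(1000.0, 1801.0, 50.0)
    hot_o  = np.linspace(0.0, 0.1, 11)
    warm_o = np.arange(10.0, 51.0, 2.0)
    cool_o = np.arange(50.0, 301.0, 10.0)
    best = (np.inf, None)
    for tw, oh, ow, oc in product(warm_t, hot_o, warm_o, cool_o):
        model = omega_star * (oh * planck(wav_um, 7000.0) +
                              ow * planck(wav_um, tw) +
                              oc * planck(wav_um, 600.0))
        chi2 = np.sum((excess - model)**2)   # uniform weights for brevity
        if chi2 < best[0]:
            best = (chi2, dict(T_warm=tw, O_hot=oh, O_warm=ow, O_cool=oc))
    return best
\end{verbatim}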
We fix the hot component temperature at 7000 K, which is characteristic of accretion shock emission (the exact value is not crucial here since we are only seeing the Rayleigh-Jeans tail). We also fix the cool component to a value of 600 K, which is roughly the equilibrium temperature for dust at the predicted location of the inner edge of the circumbinary disk around DQ Tau ($\sim 2.5a = 0.35$ AU). For the warm component, we allow the temperature to vary between 1000-1800 K (which spans the range of typical characteristic maximum dust temperatures in TTSs) in increments of 50 K. The solid angles (relative to the stellar value assuming $R=1.5$ R$_\odot$ and $d=196$ pc) are varied from 0 to 0.1 for the hot component, 10 to 50 for the warm component, and 50 to 300 for the cool component. For each set of parameters, the models for all three components are combined and then compared to the observed excess spectra, with the best fit determined via chi-squared minimization. The resulting best fits for each observed spectrum are shown in Figure~\ref{excesses_withfits}, with the associated parameters listed in Table~\ref{model_param}. Given the systematic uncertainties in the template matching, these values should be taken as indicative only; however, they provide a useful first look at the gross properties of the material in the inner regions of this system. The best-fit hot component solid angles range from zero at several epochs, where no discernible emission was detected, to 0.06, which is typical for weakly- to moderately-accreting TTSs. The cool component solid angles range from 90 to 260, roughly as expected for a puffed-up inner disk rim with a temperature of 600 K. If our assumption of a constant temperature is a reasonable approximation, the variation in solid angle may be indicative of a variable scale height of the inner edge of the putative circumbinary disk. There is a weak correlation with the warm component, but the origin of such a variation is unclear; it could simply be an artifact of our overly-simplistic assumptions of the dust temperatures. However, it should be noted that this cooler dust emission is not as well-constrained as the other components since it contributes an appreciable fraction of the total emission only at the long-wavelength end of our spectra. Observations in the 5-10 $\mu$m range are required to test the significance of any variations from this region. The warm component presents the most obvious variations. The best-fit temperatures range from 1100 K, lower than typical dust sublimation temperatures, to 1650 K, which likely corresponds to dust at the sublimation front. The hottest value coincides with a NIR photometric peak several days before a periastron passage, while the coolest value coincides with an apastron passage. The temperatures are also cooler at and just after periastron passage. This suggests progressive warming of the dust as the binary moves closer together in its orbit, with the hottest dust being removed during or just after the accretion pulse. \begin{figure}[H] \includegraphics[scale=0.9]{dqtau_excesses_bbfits.pdf} \caption{DQ Tau excess spectra shown in black, in order of observation time (left to right, top to bottom), with the binary orbital phase indicated. The top two panels show results from one binary orbit, and the bottom three panels from the following orbit.
Colored lines represent the best-fit blackbody models, with the three separate components and their characteristic temperatures shown in blue, orange, and red, and the combined model shown in magenta. \label{excesses_withfits}} \end{figure} \begin{deluxetable}{lccccccc} \tabletypesize{\small} \tablewidth{0pt} \tablecaption{Best-fit parameters for blackbody fits} \tablehead{ \colhead{date} & \colhead{phase} & \colhead{hot T$_{eff}$} & \colhead{hot $\Omega$} & \colhead{warm T$_{eff}$} & \colhead{warm $\Omega$} & \colhead{cool T$_{eff}$} & \colhead{cool $\Omega$}} \startdata 12/13/12 & 0.51 & 7000 & 0 & 1300 & 20 & 600 & 140\\ 12/22/12 & 1.08 & 7000 & 0.05 & 1250 & 21 & 600 & 140\\ 12/30/12 & 0.59 & 7000 & 0 & 1100 & 36 & 600 & 80\\ 1/5/13 & 0.97 & 7000 & 0.06 & 1650 & 11 & 600 & 260\\ 1/8/13 & 1.16 & 7000 & 0.02 & 1150 & 28 & 600 & 90 \enddata \tablecomments{Blackbody temperatures are given in K, solid angles are given in units of the stellar solid angle assuming a radius of 1.5 R$_{\odot}$ and distance of 196 pc.} \label{model_param} \end{deluxetable} \subsubsection{Emission lines} DQ Tau exhibits an emission line spectrum that is fairly typical for low to moderate accretors. Table~\ref{emission_fluxes} lists the measured line fluxes for the most prominent lines, including the Ca II triplet, He I $\lambda1.08 \mu$m, and H I Paschen and Brackett lines. This is not a complete list of detections; upper Brackett lines in H band appear in some epochs, but are severely blended with the copious photospheric absorption lines in that range. We also detect CO fundamental emission in M band (lines of the P and R branches) at all epochs; however, we cannot reliably measure these because of both residual telluric absorption and blending of the individual lines themselves. All of the measured line emission varies significantly with time, by a factor of 5 or more in some cases. The strongest lines are clearly correlated with orbital phase (Fig.~\ref{emission_lines}), with the flux peaking at or just before periastron passages, similar to the optical photometry. These lines are known tracers of accretion activity in TTSs, and their observed variation is fully consistent with the prior evidence for phase-dependent, or ``pulsed'', accretion in the DQ Tau system. Using previously-calibrated relations between line luminosity and accretion luminosity \citep[e.g.][]{1998AJ....116.2965M}, we estimated mass accretion rates from the observed Pa$\beta$ and Br$\gamma$ line fluxes. To do this, we applied extinction corrections based on the simultaneous A$_V$ measurements from the veiling analysis, and adopted the most recent determinations of the binary stellar mass \citep[M$_* \sim 0.6$ M$_{\odot}$, dividing in half the total mass from][]{2016ApJ...818..156C} to convert from accretion luminosity to mass accretion rate. The results are shown in Figure~\ref{mdot}; the accretion rate varies by roughly an order of magnitude from $\sim 10^{-9}$ to $\sim 10^{-8} \; \msunyr$, strongly correlating with orbital phase. Our measurements are consistent with previous studies; Bary14 (also based on NIR lines) and \citet{2017ApJ...835....8T} (based on U band photometry), both with far more epochs, also showed a strong correlation between accretion luminosity and/or rate and orbital phase, although both also showed some exceptions with increased activity far from periastron.
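A hedged sketch of this conversion is given below. The calibration coefficients shown are illustrative values often quoted for Br$\gamma$ from \citet{1998AJ....116.2965M} and should be verified against the adopted calibration; the extinction scaling at the line wavelength and the truncation radius are likewise assumed standard values, not numbers taken from this work.
\begin{verbatim}
import numpy as np

LSUN, GG   = 3.828e33, 6.674e-8                 # erg/s, cgs G
MSUN, RSUN = 1.989e33, 6.957e10                 # g, cm
PC, YR     = 3.086e18, 3.156e7                  # cm, s

def mdot_from_line(f_line, a_line, dist_pc=196.0, m_star=0.6,
                   r_star=1.5, a=1.26, b=4.43, r_in=5.0):
    # f_line: observed line flux (erg/cm^2/s); a_line: extinction (mag)
    # at the line wavelength. a, b: log(L_acc/Lsun) = a*log(L_line/Lsun)+b,
    # illustrative Br-gamma values often quoted from Muzerolle et al.
    # (1998); verify against the adopted calibration before use.
    f_dered = f_line * 10.0 ** (0.4 * a_line)
    l_line = 4.0 * np.pi * (dist_pc * PC) ** 2 * f_dered / LSUN
    l_acc = 10.0 ** (a * np.log10(l_line) + b) * LSUN
    # Magnetospheric accretion: L_acc = G*M*Mdot/R * (1 - R/R_in)
    mdot = l_acc * r_star * RSUN / (GG * m_star * MSUN * (1.0 - 1.0 / r_in))
    return mdot * YR / MSUN                     # Msun/yr

# e.g. Br-gamma on 1/5/13 (flux from the line table, A_V = 1.8),
# assuming A(2.17 um) ~ 0.11 A_V:
# print(mdot_from_line(11.7e-14, a_line=0.11 * 1.8))
\end{verbatim}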
\begin{deluxetable}{lccccc} \tabletypesize{\small} \tablewidth{0pt} \tablecaption{DQ Tau emission line fluxes} \tablehead{ \colhead{line} & \colhead{12/13/12} & \colhead{12/22/12} & \colhead{12/30/12} & \colhead{1/5/13} & \colhead{1/8/13}} \startdata O I $\lambda$8446 & 4.91 $\pm$ 0.70 & 13.5 $\pm$ 0.30 & 5.01 $\pm$ 0.52 & 11.9 $\pm$ 0.4 & 4.84 $\pm$ 0.42\\ Ca II $\lambda$8498 & 2.51 $\pm$ 0.17 & 7.50 $\pm$ 0.19 & 2.58 $\pm$ 0.75 & 9.74 $\pm$ 0.33 & 2.34 $\pm$ 0.80\\ Ca II $\lambda$8542 & 7.50 $\pm$ 0.78 & 7.67 $\pm$ 0.30 & 7.88 $\pm$ 0.31 & 7.51 $\pm$ 0.31 & 7.82 $\pm$ 0.08\\ Pa 11 & 1.30 $\pm$ 1.30 & $^a$ & 1.0 $\pm$ 1.0 & 2.36 $\pm$ 0.27 & 1.8 $\pm$ 1.8\\ Ca II $\lambda$8662 & 3.17 $\pm$ 0.15 & 7.62 $\pm$ 0.14 & 5.46 $\pm$ 0.12 & 8.45 $\pm$ 0.12 & 5.38 $\pm$ 0.14\\ Pa 10 & $^a$ & $^a$ & $^a$ & $^a$ & $^a$\\ Pa 9 & 1.50 $\pm$ 1.50 & 3.94 $\pm$ 0.38 & 1.4 $\pm$ 1.4 & 6.06 $\pm$ 0.34 & 0.9 $\pm$ 0.9\\ Pa 8 & $^a$ & $^a$ & $^a$ & $^a$ & $^a$\\ Pa 7 & 1.00 $\pm$ 1.00 & 4.52 $\pm$ 0.12 & 2.4 $\pm$ 2.4 & 3.63 $\pm$ 0.27 & 0.8 $\pm$ 0.8\\ Pa 6 & 1.40 $\pm$ 1.40 & 7.89 $\pm$ 0.21 & 1.2 $\pm$ 1.2 & 8.60 $\pm$ 0.32 & 1.04 $\pm$ 0.26\\ Pa 5 & 4.34 $\pm$ 0.29 & 15.23 $\pm$ 0.65 & 2.5 $\pm$ 2.5 & 14.1 $\pm$ 0.5 & 2.90 $\pm$ 0.15\\ Pa$\delta$ & 2.47 $\pm$ 0.53 & 18.3 $\pm$ 0.15 & 5.6 $\pm$ 5.6 & 15.8 $\pm$ 0.2 & 3.5 $\pm$ 3.5\\ He I & 29.0 $\pm$ 0.70 & 48.5 $\pm$ 4.30 & 44.6 $\pm$ 0.5 & 98.6 $\pm$ 1.7 & 42.6 $\pm$ 0.6\\ Pa$\gamma$ & 6.85 $\pm$ 1.26 & 30.2 $\pm$ 0.80 & 4.74 $\pm$ 0.47 & 30.2 $\pm$ 0.3 & 4.75 $\pm$ 0.23\\ O I $\lambda$1.13 & 2.88 $\pm$ 0.15 & 9.78 $\pm$ 0.28 & $^a$ & 9.98 $\pm$ 0.31 & 7.0 $\pm$ 7.0\\ Pa$\beta$ & 11.0 $\pm$ 0.44 & 48.2 $\pm$ 0.22 & 6.09 $\pm$ 0.14 & 40.0 $\pm$ 0.3 & 7.38 $\pm$ 0.19\\ Br$\gamma$ & 3.03 $\pm$ 0.27 & 8.89 $\pm$ 0.36 & 3.88 $\pm$ 0.11 & 11.7 $\pm$ 0.1 & 3.44 $\pm$ 0.21\\ Pf$\gamma$ & 2.1 $\pm$ 2.1 & 3.68 $\pm$ 0.27 & 1.5 $\pm$ 1.5 & 3.25 $\pm$ 0.17 & 0.66 $\pm$ 0.18\\ Br$\alpha$ & 4.21 $\pm$ 0.15 & 16.4 $\pm$ 0.17 & 4.27 $\pm$ 0.04 & 13.6 $\pm$ 0.4 & 4.76 $\pm$ 0.19\\ \enddata \tablecomments{ In units of 10$^{-14}$ erg cm$^{-2}$ s$^{-1}$. Identical flux and uncertainty values indicate upper limits.\\ $^a$ \, Blended with nearby stellar or uncorrected telluric absorption.} \label{emission_fluxes} \end{deluxetable} \begin{figure}[H] \plotone{dqtau_emflux.pdf} \caption{Line flux vs. binary orbital phase for the four lines indicated. Black and blue diamonds represent observed and dereddened fluxes, respectively. The error bars for the dereddened fluxes include the uncertainty in the reddening value derived from the spectral fits. \label{emission_lines}} \end{figure} \begin{figure}[H] \includegraphics[scale=0.7]{dqtau_mdot.pdf} \caption{Mass accretion rate vs. binary orbital phase. Blue and red diamonds represent values derived from the measured Paschen $\beta$ and Brackett $\gamma$ line fluxes, respectively. \label{mdot}} \end{figure} The He I line at 1.083 $\mu$m is a particularly interesting diagnostic of both accretion and outflow, as it often shows both blueshifted and redshifted absorption superposed on the emission profile \citep{2003ApJ...599L..41E, 2006ApJ...646..319E}. This line is robustly detected in emission in all of our spectra, with blueshifted absorption components seen in three of the five epochs (Fig.~\ref{he1}). \citet{2006ApJ...646..319E} observed DQ Tau at high spectral resolution at an orbital phase of $\sim 1.3$, and their profile is consistent with the strength and shape of our observations at quiescent epochs. Our spectral resolution is too low to detect the blueshifted absorption they observed at velocities around -200 km s$^{-1}$. We detect a much more strongly blueshifted absorption component at $\sim 400$ km s$^{-1}$ at the three epochs closest to a periastron passage, which is significantly more blueshifted than seen in any of the T Tauri line profiles shown by \citet{2006ApJ...646..319E}.
Bary14 also reported on variations of the He I blueshifted absorption component in DQ Tau as a function of orbital phase, though they do not remark on velocities (and many of their spectra have insufficient resolution to reliably detect the absorption components). The spectrum we observed on 12/22/2012 (orbital phase 1.08) exhibits a second, stronger absorption component centered at roughly {\it 750} km s$^{-1}$. Such a velocity is virtually unprecedented for any tracer of gas motions observed around any T Tauri star, and is far above the escape velocity of the system. This component does not appear to be a spurious artifact since it is clearly seen at the same velocity and depth in each of the 12 individual exposures that were averaged into the final spectral extraction. However, no absorption or emission at comparable velocities is seen in any other observed line. The appearance of this absorption very close to a periastron passage, where the magnetospheres of the two stars are likely to overlap, suggests a possible origin in some kind of flare event caused by magnetic reconnection (possibly analogous to a coronal mass ejection). We cannot make an accurate estimate of the terminal velocity since the absorption component is not spectrally resolved, but it could be as high as $\sim 1000$ km s$^{-1}$. Further observations near periastron passage at higher spectral resolution are needed to determine whether events of this type are common, and what the true origin might be. \begin{figure}[H] \includegraphics[scale=0.7]{dqtau_he1.pdf} \caption{The He I 1.083 $\mu$m line from the five epochs of SpeX spectra. Relative shifts between the spectra introduced by uncertainties in the wavelength calibration have been removed by comparing the velocities of nearby photospheric absorption lines, arbitrarily using the 12/13/2012 spectrum as the baseline. Given the range of these relative shifts, we estimate that the absolute velocity scale is accurate to $\sim 30$ km s$^{-1}$. The velocity scale was further corrected for the systemic radial velocity of 22 km s$^{-1}$. Most of the individual emission or absorption components are likely not resolved. \label{he1}} \end{figure} \section{Discussion} The combined results of our photometric and spectroscopic observations show clear correlations of the accreting gas and hot dust with the binary orbit in the DQ Tau system. This behavior is broadly consistent with the pulsed accretion scenario predicted by simulations. Our spectroscopy indicates that the cavity inside the circumbinary disk, if present, is never completely clear of dust; the minimum dust temperatures we measure ($\sim 1100$ K) around apastron orbital phases are much higher than the equilibrium temperature at the location of the disk edge expected by simulations ($\sim 600$ K). However, those dust temperatures are also clearly lower than the sublimation temperature, which suggests that there is little if any dust near the sublimation fronts of either star at those epochs. With only five epochs, the spectroscopy offers limited snapshots of the full range of behavior of DQ Tau as traced by the photometric data. Figure~\ref{model_colors} shows observed NIR colors for a subset of the photometric observations that span a time range encompassing the spectroscopy. 
Corresponding colors derived from the blackbody models of the excess spectra, with a scaled photospheric template spectrum added in, are also indicated; these match the contemporaneous photometry to within the relative flux accuracy of the spectroscopic data. Note that four of the spectroscopic epochs span a very small range of NIR colors, and the reddest epoch (1/5/2013) is still somewhat bluer than the peaks in the H$-$K light curve. We calculated additional sets of blackbody models to see what range of parameters would be needed to explain the maxima and minima of the NIR colors. One of these sets, using the minimum measured extinction and assuming no hot component, is shown in the lower panel of Fig.~\ref{model_colors}. Although there is degeneracy between the effects of extinction and hot accretion emission, we can make some general conclusions. One, the rise of the NIR excess prior to the onset of an accretion pulse is primarily explained by a significant increase in the dust emission solid angle (by as much as a factor of 3). An increase in the dust temperature may also occur, but the photometric accuracy prevents a definitive constraint in the absence of simultaneous spectroscopy. The rise cannot be due to a significant increase in the extinction alone, otherwise the $J-H$ color would be larger than observed. Two, the highest dust temperatures, corresponding to the expected sublimation limit, and the highest extinctions can only occur once the accretion pulse is in progress; the blue excess serves to cancel out the reddening effect at shorter NIR wavelengths, effectively putting a cap on the $J-H$ colors. At epochs with the strongest accretion emission, there is a pronounced blueing of the $J-H$ color. Near the end of a pulse, both the $J-H$ and the $H-K$ colors drop, signaling a decline in both the dust temperature and emission solid angle. Three, the warm dust component exhibits a varying minimum temperature between periods of quiescence, which may be related to the 75-day period possibly present in the NIR photometry. The minimum temperature may get as low as $\sim$900 K, still warmer than the expected temperature of the inner edge of the circumbinary disk. \begin{figure}[H] \includegraphics[scale=0.7]{dqtau_colors_models.pdf} \caption{Observed NIR colors for DQ Tau in a window around the times of the spectroscopic observations, compared to stellar plus blackbody models. The color scheme of the observed points is based on orbital phase, with green through red to brown representing the progression from apastron (phase 0.5) to periastron (phase 1), and black through blue to green the progression from periastron (phase 0) to apastron. (Upper panel) H$-$K color as a function of time. The light purple diamonds connected by a dotted line show the scaled B band magnitudes for reference. Large black squares represent model colors for each spectroscopic observation derived by combining the excess spectrum blackbody fits with the scaled stellar template spectrum. (Middle panel) J$-$H color as a function of time. The symbols are the same as in the upper panel. (Lower panel) H$-$K versus J$-$H colors, using the same phase-dependent color scheme as in the upper panels. The black lines represent blackbody models using the same prescription as the models fit to the excess spectra, but with no hot component and a constant extinction of $A_V=1$. Each solid line shows the locus of models for a single warm dust component temperature (from top to bottom: 1700, 1500, 1300, 1100, 900 K).
Each dotted line connects models with the same warm component solid angle (from left to right: 10, 20, 30, 40, 50, 70). The blue arrows show the shift in color that would result with an added hot component with $T=7000$ K and $\Omega = 0.06$, for extinctions $A_V=1$ (left arrow) and $A_V=1.8$ (right arrow). \label{model_colors}} \end{figure} \epsscale{1} Using prior simulations as a guide, we propose the following scenario as a general framework for understanding the observations, using schematics of the system geometry as shown in Figures~\ref{cartoon_0.5}-\ref{cartoon_1.05}. Note that these present a highly idealized picture of the complex dynamical environment (for example, the circumbinary disk inner edge is likely not sharp and well-defined, and the material inside is not all strictly confined to narrow streams), but are intended to provide a rough guide to the scale of various physical regions of interest. To start, at apastron orbital phase, accretion streams falling from the inner edge of the circumbinary disk begin to approach each star (Fig.~\ref{cartoon_0.5}). Most of the material has not yet reached the sublimation radius around either star, hence the apparent dust temperature should be lower than the typical sublimation point. There is still gas left around one or both stars, located mostly between the sublimation front and corotation radius, slowly accreting onto the star(s) at a quiescent level after having been cut off from the previous orbit's streams. As the stars come closer together in their orbits, the accretion streams fall inwards, increasing the amount of gas and dust inside the cavity, and begin feeding temporary circumstellar structures (Fig.~\ref{cartoon_0.85}). The dust in the streams reaches the sublimation fronts, and increases the gas supply to the accretion flows onto the stars, resulting in an increasing accretion rate. The increase in the amount of circumstellar material, and possibly its scale height, leads to an increase in the NIR emission (and possibly the stellar extinction if the material is sufficiently stirred out of the disk plane). As the stars continue to draw closer, the circumstellar dust structures begin to interact. Near periastron phase, there is a single sublimation front around both stars, the inner gas disks become disrupted as the corotation radii slightly overlap, and the stellar magnetospheres may also interact (Fig.~\ref{cartoon_1}). This produces a spike in the stellar accretion rate. Finally, as the stars begin to separate, the circumstellar dust structures are torn apart, and the accretion streams become disconnected, leading to a drop in the NIR excess flux and characteristic temperature, and a decrease in the stellar accretion rate (Fig.~\ref{cartoon_1.05}). This generalized picture does not explain all observed characteristics. For one, the accretion of gas onto the binary never completely stops, which means there must always be some reservoir of material located near the corotation radius of one or both stars \citep[the viscous timescale at that location is of order 100 yr;][] {1998ApJ...495..385H}. Some material also likely remains around the sublimation radius, which should lead to some hot dust emission; if so, it may be that such emission is too small to detect along with the cooler dust emission that is dominant during the quiescent epochs (from the excess spectra, we estimate a conservative limit on the solid angle of a component at the sublimation temperature to be $<10$\% of the dominant cooler component). 
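The characteristic radii invoked throughout this scenario follow from simple radiative equilibrium. The sketch below, which assumes blackbody grains heated only by the central source (no backwarming, no accretion heating unless it is folded into the input luminosity), gives the scale of the sublimation front and of the cooler dust regions; the luminosity is left as a free input rather than asserting a value.
\begin{verbatim}
import numpy as np

SIGMA_SB, LSUN, AU = 5.670e-5, 3.828e33, 1.496e13   # cgs

def t_equilibrium(r_au, l_lsun):
    # Blackbody-grain equilibrium temperature, no backwarming:
    # T = [L / (16 * pi * sigma * r^2)]^(1/4)
    r = r_au * AU
    return (l_lsun * LSUN / (16.0 * np.pi * SIGMA_SB * r**2)) ** 0.25

def r_sublimation(t_sub, l_lsun):
    # Radius (AU) where the equilibrium temperature reaches t_sub
    return np.sqrt(l_lsun * LSUN /
                   (16.0 * np.pi * SIGMA_SB * t_sub**4)) / AU

# e.g., for a total (stellar + accretion) luminosity l_tot in L_sun:
# print(r_sublimation(1650.0, l_tot), t_equilibrium(0.35, l_tot))
\end{verbatim}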
Another difficulty lies in explaining the very rapid decline in dust emission and temperature immediately after periastron orbital phase. It seems unlikely that all dust near the sublimation front gets dynamically disrupted in one or two days; again, there may be an emission component that is too small to reliably identify in the excess spectrum. Finally, not all of the monitored binary orbital cycles exhibit such a clear-cut pulse signature in either the optical or NIR wavelength range. A few cycles show very complicated photometric behavior, with multiple peaks in both the optical and NIR. These may indicate a breakdown of the regular two-armed accretion stream infall, perhaps by turbulent clumps accreted from the circumbinary disk. In any case, more detailed simulations, and calculation of observables from them, are needed to clarify specific predictions from the pulsed accretion theory. In cycles with a strong NIR peak, the observed $J-H$ colors rule out the presence of a significant amount of dust near the sublimation front ($T \sim 1600$ K) prior to the start of the stellar accretion pulse. The fact that we measured a 1650 K component in one spectrum during an accretion pulse suggests a connection between the sudden increase in dust temperature and the pulse itself. This could simply be a result of the sudden expansion of the sublimation radius to encompass both stars when they move sufficiently close together (as shown in Fig.~\ref{cartoon_1}). There may also be a concurrent ``puffing up'' of the material near the sublimation front as irradiation heating from the accretion luminosity increases. Enhanced outflow related to the accretion pulse may also entrain more dust, raising the effective solid angle of the hottest dust \citep{2012ApJ...758..100B}. Another possibility is the formation of shocks as the accretion streams collide with other material in the cavity, as seen in simulations \citep[e.g.][]{2016ApJ...827...43M} and observed via H$_2$ emission in the wide binary GG Tau \citep{2012ApJ...754...72B}; such shocks could potentially heat the circumstellar dust and increase its scale height. Our simplistic blackbody fits imply a very large solid angle for the warm dust component near NIR photometric peaks prior to an accretion pulse, with $\Omega \sim 50 \, \Omega_*$ or more for $T_{warm} \geq 1200$ K. This is much larger than inferred for single T Tauri stars (McClure13), and may be indicative of a higher scale height for the temporary circumstellar dust structures that form in the DQ Tau system close to periastron orbital phase, as might be expected from dynamical stirring of the inner disk material by the binary motion. However, given the very different geometry, a detailed comparison with the standard accretion disk model requires analysis of circumbinary disk simulations, which is beyond the scope of this work. Several previous investigations of warm material in the DQ Tau system support our findings. \citet{2001ApJ...551..454C} characterized CO fundamental emission line profiles, finding a CO excitation temperature of 1200 K. Assuming a simple Keplerian disk model, they estimated an emitting region in the range $\leq 0.1$ to $\sim 0.5$ AU, which is at least partly co-located with the warm dust component. \citet{2009ApJ...696L.111B} used Keck NIR interferometry to resolve K-band emission in the system, placing it at roughly 0.15-0.2 AU, which again is consistent with our inferred location of the warm component.
In contrast, \citet{2018ApJ...862...44K} recently presented a somewhat different characterization of the inner disk region, based on contemporaneous observations from the {\it Kepler K2} mission and the {\it Spitzer} Space Telescope. Their {\it Spitzer} data included 3.6 and 4.5 $\mu$m photometry obtained at a roughly daily cadence spanning slightly less than one orbital period, and covering most of one accretion pulse event; the light curves in both bands correlated with the {\it Kepler} optical data, exhibiting an increase in flux between orbital phases of about 0.8--1, and peaking a few days before periastron. There is no evidence of a lag between the infrared and optical data, unlike what we have found for most pulse events at shorter infrared wavelengths. However, the {\it Spitzer} observations began when the optical pulse was already underway, so it is possible the onset of the infrared flux increase may have occurred before that of the optical flux. We also note that their measured $[3.6]-[4.5]$ colors get progressively bluer before periastron, and redder thereafter, which is qualitatively consistent with the dust temperature changes we have inferred from our data. Based on the {\it Spitzer} colors, K\'osp\'al et al. inferred a characteristic dust temperature of 917 K near periastron passage, declining to 825 K a few days later. These temperatures are significantly cooler than our spectroscopically-derived values; we believe their estimates to be in error because they are based on single-temperature blackbody fits to two photometric points spanning a limited wavelength range. As we have shown, the infrared excess emission shape can only be explained with a range of dust temperatures (even the two dust components we adopted are likely overly simplistic compared to reality), and the {\it Spitzer} photometry by itself is relatively insensitive to any warmer dust component. Since the {\it Spitzer} data coincided with an optical accretion pulse, there must have been material closer to the stars than the location they inferred in order to feed the stellar accretion flows, thus warmer dust was almost certainly present. Given the characteristic dust temperatures they inferred, K\'osp\'al et al. estimated a physical location for the emitting material at about 0.13 AU. Assuming this location corresponds to the inner edge of the circumbinary disk, they interpreted the increase in characteristic dust temperature near periastron as due to increased irradiation by the accretion pulse. Our results suggest that changes in the location and geometry of the emitting material are also important, if not dominant, in setting the level and shape of the dust emission. In addition, the warmer dust temperatures we infer ($>1100$ K) indicate that most of this material is unlikely to be associated with any stable circumbinary disk since it is located in a region where the binary torques are very strong. Our data show ambiguous evidence of the effects of variable irradiation on the warmest circumstellar material, except perhaps in the case of the extremely large pulse we observed near JD 2456630. Nevertheless, further NIR spectroscopy tracing the onset and growth of the NIR photometric peaks is required to better understand the relative roles of dust heating and surface area. As mentioned above, accretion pulses do occasionally occur at phases other than periastron. We observed several examples of weak pulses a few days after periastron passage in the 2013-2014 season, and one cycle with four roughly equally spaced peaks.
Other studies have also seen pulses occurring close to apastron phase \citep[Bary14;][]{2017ApJ...835....8T, 2018ApJ...862...44K}. The origin of these events is unclear; they may represent stochastic accretion events related to clumpy circumstellar material, as often seen in classical T Tauri disks \citep{2014AJ....147...82C, 2014AJ....147...83S, 2016AJ....151...60S}. \citet{2017ApJ...835....8T} suggested the possibility that the stars may occasionally cross through remnant streams from a previous cycle, as seen in some simulations. Interestingly, the cycle we observed containing four optical peaks occurred after the extremely large pulse near JD 2456630, and also exhibited the reddest $H-K$ baseline level, indicative of a significant amount of residual material inside the circumbinary disk. The correlation between the $H-K$ minima and periastron passages with weak or no optical pulses suggests a link between pulse strength and the amount of material dragged inward by the accretion streams. The source of the 75-day periodicity may be a result of dynamical processes in or near the inner edge of the circumbinary disk. For example, some simulations have shown that the disk cavity can become eccentric and/or develop repeated azimuthal density enhancements \citep{2012ApJ...749..118S, 2017MNRAS.466.1170M}. The former effect is unlikely to lead to the observed variability since the eccentricity precesses on a much longer timescale, of order years. In the latter case, when a density ``lump'' rotates into one of the points from which the streams originate, it can lead to an increase in the surface area and accretion rate in the stream. If the 75-day periodicity is tracing orbital motion, the source would be located at $\sim 0.37$ AU, very similar to the predicted location of the lump at $\sim 3a$, which corresponds to $\sim 0.39$ AU in DQ Tau. However, the simulations to date find that such density enhancements form only in binaries with very low eccentricities, which is not the case for the DQ Tau system. Finally, our finding of an increase in the extinction near periastron phase has implications for interpretation of optical light curves. Some of the complicated structure seen at shorter timescales \citep[e.g.][]{2017ApJ...835....8T, 2018ApJ...862...44K} may be due at least in part to variations in the extinction along the line of sight. Because of the degeneracy between extinction and accretion excess, it is impossible to reliably disentangle the two effects with photometry alone. Spectroscopy is essential to breaking this degeneracy by using the method of veiling characterization as a function of wavelength. \begin{figure}[H] \centering \subfloat{% \raisebox{-0.5\height}{\includegraphics[width=0.5\columnwidth]{phase_05.pdf}}% }\qquad \subfloat{% \raisebox{-0.5\height}{\includegraphics[width=0.4\columnwidth]{dqtau_2012_phase_05.pdf}}% } \caption{(Left) A schematic of the DQ Tau system at apastron orbital phase. The position of each star in its orbit is indicated by the blue dots, and the stellar orbits are represented with black ellipses. The corotation radius around each star, as estimated from the observed stellar rotation period (Table~\ref{params}), is indicated with the green circles. The red circles represent the theoretical dust sublimation radius around each star, calculated assuming T $=$ 1650 K and negligible accretion luminosity. The yellow circles show the approximate location of dust with an equilibrium temperature of 1100 and 1300 K, again assuming negligible accretion luminosity.
The inner edge of the putative circumbinary disk and accretion streams are shown in orange/red, and dust-free accreting circumstellar gas is shown in gray. All sizes are to scale. (Right) A selection of the B band and $H-K$ color light curves, with the orbital phases corresponding to the schematic indicated with dotted vertical lines. \label{cartoon_0.5}} \end{figure} \begin{figure}[H] \centering \subfloat{% \raisebox{-0.5\height}{\includegraphics[width=0.5\columnwidth]{phase_085.pdf}}% }\qquad \subfloat{% \raisebox{-0.5\height}{\includegraphics[width=0.4\columnwidth]{dqtau_2012_phase_085.pdf}}% } \caption{Same as Figure~\ref{cartoon_0.5}, for an orbital phase of 0.85. Quasi-stable circumstellar disks begin to develop as material is funneled inward, leading to a rise in the warm dust emitting area and characteristic temperature. \label{cartoon_0.85}} \end{figure} \begin{figure}[H] \centering \subfloat{% \raisebox{-0.5\height}{\includegraphics[width=0.5\columnwidth]{phase_10.pdf}}% }\qquad \subfloat{% \raisebox{-0.5\height}{\includegraphics[width=0.4\columnwidth]{dqtau_2012_phase_10.pdf}}% } \caption{Same as Figure~\ref{cartoon_0.5}, for periastron orbital phase. During closest approach the circumstellar dust structures coalesce around both stars, and a burst of accretion occurs as the inner circumstellar gas is disrupted and rapidly falls onto the central stars. The sublimation front indicated here was calculated assuming the combined luminosity of both stars and a contribution from the accretion luminosity, given as the maximum measured level from our emission line observations. \label{cartoon_1}} \end{figure} \begin{figure}[H] \centering \subfloat{% \raisebox{-0.5\height}{\includegraphics[width=0.5\columnwidth]{phase_105.pdf}}% }\qquad \subfloat{% \raisebox{-0.5\height}{\includegraphics[width=0.4\columnwidth]{dqtau_2012_phase_105.pdf}}% } \caption{Same as Figure~\ref{cartoon_0.5}, for an orbital phase of 1.05. As the stars move further apart, the circumstellar dust structure is disrupted, leading to a drop in the infrared excess strength and characteristic temperature, as well as the accretion rate onto the stars. The two yellow circles mark approximate locations for dust at T$=$1100 and 1300 K, assuming the combined stellar luminosity and negligible accretion luminosity. \label{cartoon_1.05}} \end{figure} \acknowledgements We acknowledge Steve Lubow, Jeff Bary, Ben Tofflemire, Bob Mathieu, and Bo Reipurth for helpful discussions and encouragement. J. M. extends a special thanks to the staff at IRTF, particularly Alan Tokunaga and John Rayner, for their generous scheduling flexibility and peerless remote observing support. \bibliographystyle{aasjournal}
{ "timestamp": "2019-04-16T02:03:15", "yymm": "1904", "arxiv_id": "1904.06424", "language": "en", "url": "https://arxiv.org/abs/1904.06424" }
\section{Introduction} In the last two decades growing attention has been dedicated to the understanding of RNA. As for proteins, RNA structure and function are closely tied and play a determining role in many biomolecular processes such as the splicing process, transcriptional and translational machineries, and RNA localization and decay \cite{ReviewA}. Despite this importance, the number of experimental RNA structures available at an atomic level in public databases such as the Protein Data Bank (PDB) \cite{PDB} or the Nucleic Acids Database (NDB) \cite{NDB} remains limited due to challenging experimental problems related to the preparation and/or crystallization of RNAs, which are usually more flexible and dynamic than proteins \cite{ReviewB}. Currently, more than 90\% of structures stored in the PDB database \cite{PDB} are proteins, while less than 5\% of the human genome encodes for proteins. This discrepancy has stirred the curiosity of scientists and led to the remaining 95\% of the human genome sometimes being referred to as the dark matter of the genome \cite{DarkMatter1, DarkMatter2}. To overcome the lack of structurally resolved RNA, computational methods have complemented experimental efforts to get more insight into how RNA structure and dynamics determine its functions \cite{Firstpaper,Join,Pappa}. Significant efforts have been devoted to the construction of methods to predict the RNA secondary structure, mainly employing thermodynamics-based models \cite{2DReview}. These methods have recently achieved significant improvements through the incorporation of auxiliary structural information from high-throughput chemical probing technologies \cite{Join1,Join2}. However, even though knowledge of the RNA secondary structure provides important information, it is not sufficient to fully explain RNA function or interactions with other biomolecules \cite{Computational3D}. In recent years, much attention has been focused on the construction of RNA 3D structure prediction tools of increasing accuracy and speed \cite{CompModelingI, Adamiak, ModeRNA, Bujnicki, Chen, Das, iFold, 3dRNA, Baker, Altman, Alex, MCFOLD, ReviewI}. In this review we provide a concise overview of these methodologies, present their strengths and limitations, and highlight the open challenges in RNA structure prediction. We particularly underline the recent developments related to the use of coevolutionary information to improve the accuracy of RNA 3D structure prediction methods. The structural information remains, however, static and provides only one piece of the puzzle of RNA function. Another important component is the dynamics of RNA, for example while undergoing large conformational rearrangements \cite{RNA_Dyn_I, RNA_Dyn_II}, which is exhaustively covered in the excellent review \cite{ReviewBIG}. \section{From the RNA sequence to its 3D structure} The basic unit of RNA is the nucleotide, which is formed by planar aromatic rings linked to a ribose unit that in turn is attached to a phosphate group (see Fig.~1). The sequence of the different constituent nucleotides (adenine, guanine, cytosine, uracil) of a given RNA molecule is defined as its \textbf{primary structure}. Nucleotides typically complement each other by forming the canonical base pairs A-U and C-G, which maximizes inter-nucleotide hydrogen bonding. This leads to short chains of nucleotides folding into antiparallel double helices.
The nucleotides that do not form Watson-Crick base pairs can remain unpaired or establish less stable non-canonical base pairs, forming internal and bulge loops, hairpins and junctions. The \textbf{secondary structure} is thus essentially defined as the set of base pairs occurring in the RNA molecule. The \textbf{tertiary structure} is the complete set of three-dimensional coordinates of all atoms of the RNA structure. This includes the formation of a plethora of tertiary motifs such as pseudoknots, stacking of helices, multiple base pairing, ribose zippers and loop-loop interactions that determine the molecular shape in physical space. An accurate computational prediction of the RNA tertiary structure starting from its sequence is particularly challenging, as the 3D structure depends not only on the sequence but also on environmental conditions such as the ion concentrations and temperature. \begin{figure}[!h] \begin{center} \includegraphics[width=17cm]{RNA_002.pdf} \end{center} \caption{From primary (sequence), to secondary and to tertiary RNA structures.} \end{figure} \section{Computational modeling of RNA 3D structure} Here, we review and compare some widely known methods for the prediction of the three-dimensional structure of RNA. The available approaches can be roughly divided into three different types: fragment-based, physics-based and comparative modeling. To compare the state-of-the-art prediction methods and assess their performance, a blind experiment for RNA 3D structure prediction has been established in recent years \cite{RNA-PuzzleI,RNA-PuzzleII,RNA-PuzzleIII}, with the last round focused on the challenging prediction of six RNA structures of riboswitches and ribozymes \cite{RNA-PuzzleIII}. \subsection{Fragment-based homology methods} The main idea behind this approach is to assemble the 3D prediction of target molecules using small fragments from libraries with similar sub-sequences. The theoretical justification of such a procedure comes from assuming that the distribution of the different conformations observed in known RNA structures for given fragment sequences is a good approximation for the conformation of similar or identical sub-sequences. The basic steps of these methods consist first of the fragmentation of the secondary structure used as input. As a second step, a search algorithm is employed to match these elements against fragment libraries constructed from databases of known RNA structures. Finally, all the elements are assembled together using different algorithms (see below) and, usually, a final refinement stage using atomic force fields or coarse-grained potentials is performed. One advantage of these methods is their computational efficiency, as the fragment assembly drastically reduces the conformational search space. As the structural diversity of the fragment library directly limits the accuracy of the composed assembly, good results require a large and diverse library as well as a good scoring function. Here, we list methods belonging to this class and some of their characteristics. \begin{itemize} \item \textbf{RNAComposer} \cite{Adamiak}: after the fragmentation step, the predicted secondary structure elements (stems, loops and single strands) constitute the input pattern for a search in the FRABASE 3D fragment dataset developed by the authors. From the matched elements a 3D structure is constructed by first superimposing and then merging them. Finally, an energy minimization is performed in the CHARMM force field \cite{CHARMM}.
\item \textbf{Vfold3D} \cite{Chen} uses a coarse-grained representation of the RNA. First, it utilizes Vfold2D, a free-energy-based model, to predict the secondary structure, from which it extracts motifs (helices, hairpin loops, internal loops,...). From these motifs it searches for the best template in the VFoldMTF database. After assembling the 3D structural motifs and adding all atoms to the coarse-grained structure (according to the template), it performs an all-atom structure refinement. \item \textbf{3dRNA} \cite{3dRNA} uses a two-step procedure where first the smallest secondary elements (SSEs) are assembled into hairpins and duplexes one by one following the 5' to 3' end direction. Then, these structures are further assembled into a complete tertiary structure by selecting the junction components from a junction database. Finally, to ensure chain connectivity, the assembled model is energy minimized in the AMBER 98 force field \cite{AMBER}. \item \textbf{FARNA} \cite{Baker} (Fragment Assembly of RNA) also uses a coarse-grained representation of the RNA structure and a fragment assembly strategy employing a Monte Carlo process that is guided by a low-resolution knowledge-based energy function. The authors developed knowledge-based base-pairing and base-stacking potentials to which they add several other terms, such as a penalty for steric clashes. The structural model undergoes a second refining step in an all-atom potential to improve the accuracy and to better discriminate competing structural models. The two-step protocol is called \textbf{FARFAR} (Fragment Assembly of RNA with Full Atom Refinement) and is part of the ROSETTA package. \item \textbf{MC-Fold/MC-Sym} \cite{MCFOLD}: this pipeline uses the combination of small motifs called nucleotide cyclic motifs (NCMs). The NCM 3D fragments are assigned to the given sequence by choosing the structure with the highest probability of occurrence. Then the structural NCMs are concatenated using a Las Vegas probabilistic algorithm. \end{itemize} \subsection{Physics-based methods} In contrast to the previous methods, the physics-based models do not use template structures in the assembly of RNA fragments/motifs but derive and parameterize energy functions depending on specific conformations, similar to approaches applied for proteins \cite{schug2003,schug2005}. These methods can be further separated into \emph{ab-initio} approaches and knowledge-based approaches. In the latter methods, the energy functions are derived using the inverse Boltzmann law from the probability of occurrence of certain sequence-structure elements in a dataset of known structures. In contrast, the \emph{ab-initio} methods are based on force fields usually adopting harmonic potentials for bond lengths and angles, Lennard-Jones potentials for van der Waals interactions, and electrostatic potentials that get reparameterized based on RNA structure and thermodynamics data. Such energy functions are then used in Molecular Dynamics (MD) simulations or Monte Carlo (MC) minimization, often associated with enhanced sampling techniques such as temperature replica exchange, or in discrete molecular dynamics simulations in which the energy function is substituted with discrete step-function potentials that drastically reduce the computational cost of the method. The strength of the physics-based methods is that they are applicable to sequences with no known similar sequences or even sub-sequences.
Their disadvantage is the need to explore a large conformational search space, which increases computational demands and decreases their computational efficiency in comparison to fragment-based methods. In the following, we list the computational tools that use this approach. \begin{itemize} \item \textbf{iFoldRNA} \cite{iFold} uses a simplified "three-bead per nucleotide" representation of the RNA structure, and is based on a replica-exchange discrete molecular dynamics (DMD) simulation protocol to span the conformational space. DMD incorporates base-pairing and base-stacking interactions into an energy function, where in addition an entropic estimate of loop formation is also considered. \item \textbf{NAST} \cite{Altman} uses a coarse-grained representation of RNA considering one quasi-atom per RNA nucleotide. A simplified knowledge-based energy function, derived from the observed RNA geometries at the nucleotide level, is used to predict the target structure by global energy minimization. NAST requires the (known or predicted) secondary structure information as input and also accepts tertiary contacts to guide the folding. \item \textbf{SimRNA} \cite{Bujnicki} uses a coarse-grained representation of the RNA structures, reducing the number of explicitly represented atoms per residue from about thirty to only five. It is based on dedicated RNA statistical potentials to compute the structure free energy and identifies the native structure via Monte Carlo sampling. \end{itemize} \subsection{Comparative homology-based modeling} Another type of method uses homology modeling approaches, identifying a structurally related template and geometrically aligning residues from the target onto corresponding residues in the template. Examples of this type of method are \textbf{RNABuilder} \cite{ComparativeI} and \textbf{ModeRNA} \cite{ModeRNA}. The latter also makes extensive use of evolutionary information by using multiple RNA sequence alignments to better reveal patterns of conservation that improve the accuracy of the prediction starting from the 3D template. It has been shown that the addition of multiple templates can be successfully employed to improve the accuracy of homology-based methods. Another characteristic common to this type of method is that (short) regions with no template are modeled by employing fragment-based insertion approaches. Finally, the methods usually perform a geometry optimization using a force field in order to obtain physically reasonable conformations. The RNABuilder method, for example, uses a multi-resolution approach that handles forces at different levels of resolution, rigidifying certain bonds, residues or parts of molecules while keeping the others flexible. The major drawback of this class of methods resides in the difficulty of finding a template structure for the given sequence and an informative multiple sequence alignment. Indeed, RNA structure templates are limited by the number of structures deposited in the widely known databases \cite{NDB}. Regarding the alignments, one can use those available for many RNA families in the Rfam database \cite{Rfam} or perform the alignment via commonly used multiple RNA sequence alignment packages such as R-Coffee \cite{RCoffe}, MUSCLE \cite{Muscle} or Infernal \cite{Infernal}. The strength of these methods is their high accuracy at modest computational cost when good structural templates can be found. Their performance drops in the absence of such templates.
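Since the blind assessments discussed next rank models mainly by the RMSD after optimal superposition, we include a minimal, self-contained Python sketch of this metric (an illustration using the standard Kabsch algorithm, not code from any of the packages cited above; the two input arrays are assumed to contain matched atoms, e.g., the phosphorus atoms, of the predicted and the experimental structure):

\begin{verbatim}
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate arrays after optimal
    rigid-body superposition (Kabsch algorithm)."""
    # Center both structures on their centroids.
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    # SVD of the covariance matrix yields the optimal rotation.
    U, S, Vt = np.linalg.svd(P.T @ Q)
    # Guard against an improper rotation (reflection, det = -1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))

# Sanity check: a rotated copy of a structure has RMSD ~ 0.
rng = np.random.default_rng(0)
coords = rng.normal(size=(50, 3))
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(kabsch_rmsd(coords @ Rz.T, coords))  # ~ 1e-16
\end{verbatim}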
\subsection{Performance assessment and RNA-Puzzles prediction} Most of the analyzed RNA structure prediction methods participated in the RNA-Puzzles competitions \cite{RNA-PuzzleI,RNA-PuzzleII,RNA-PuzzleIII}, in which a set of experimentally resolved RNA 3D structures had to be blindly predicted. To assess the performance of the predictors and rank the models, different metrics have been used, such as the root mean square deviation (RMSD) between the predicted and the experimental crystal structures, which gives more global information about the model's accuracy, or the deformation index and the complete deformation profile matrix, which instead capture the "local" accuracy at the nucleotide interaction level. In the first RNA-Puzzles round \cite{RNA-PuzzleI}, in addition to two simple small targets that were relatively well predicted, the more challenging riboswitch structure was not accurately reproduced, with a mean RMSD accuracy of about 15\, \AA. Moreover, while most methods achieve good performance on Watson-Crick base pairs, non-Watson-Crick interactions remain difficult to predict and clash scores remain generally quite high. In the second RNA-Puzzles round \cite{RNA-PuzzleII}, the best RMSDs for a long nucleotide sequence range between 6.8 and 11.7\, \AA, indicating a global improvement of the methods' performance. A substantial improvement in the prediction of non-Watson-Crick interactions was also observed. Finally, in the last RNA-Puzzles competition \cite{RNA-PuzzleIII}, the predictions achieved a consistently high level of accuracy, especially when a high-homology template could be identified. For example, in the case of the SAM-I riboswitch aptamer prediction, which has a template (PDB code 3QIR), the average RMSD over all predicted models is about 4.3~\AA, with a standard deviation of less than 2~\AA. Unfortunately, when the homology with the template is not high enough, the accuracy of the methods is still not satisfactory and depends on the length of the RNA sequence. Small RNA sequences can be predicted with good accuracy, as exemplified by the ZTP riboswitch predicted with an average RMSD of about 6~\AA, and as also shown in the previous RNA-Puzzles round. For long sequences such as the \emph{ydaO} riboswitch, no method is capable of reliably predicting the native three-dimensional conformation, with an average RMSD of about 16~\AA. In order to improve the structure prediction of these challenging targets, there is a need for new and better-performing algorithms. In the next section we will thus present recent progress in this direction and, in more detail, show how coevolutionary information can be used to significantly improve the methods' accuracy. \begin{figure}[!h] \begin{center} \includegraphics[width=16.5cm]{RNA_001_04.pdf} \end{center} \caption{3D RNA structure prediction methods and their principal characteristics} \end{figure} \section{Including evolutionary information to improve 3D structure prediction} \subsection{Residue co-evolution and contact prediction} A significant amount of data obtained from high-throughput sequencing technologies provides an invaluable source of evolutionary information that can be used to improve protein \cite{Weigt2009, Schug2009, Marks, Weigt, Dago2012} and RNA structure prediction \cite{Alex, MarksIII}. The basic idea behind these approaches is tracing the co-variation of amino acid or nucleic acid pairs in proteins and RNAs belonging to homologous families.
Such co-variation indicates structural proximity of the involved residues and is hence related to biomolecular structure and stability properties. Compensatory mutations occur when a mutation with a detrimental effect at a given site interacts with a secondary mutation at another site to restore the molecular fitness \cite{Dimitri}, indicating the tendency of co-evolving residues to represent physical interactions that are important for the stability and function of biomolecules. In the last decade, many statistical methods have been developed to identify co-evolving residue pairs in a multiple sequence alignment (MSA) \cite{Alfonso}. One can assume that such a correlation occurs due to the spatial proximity of the two residues, even if it can also arise from indirect effects related to the transitivity of interactions through third residues. The use of statistical methods such as maximum entropy models (MEM) or direct coupling analysis \cite{Alex, Weigt2009, Dago2012, Weigt} allows one to disentangle the transitive effects in the network of constrained residue-residue interactions and thus gives more efficient and robust contact predictions. Using these statistical approaches, one can detect long-range tertiary contacts from sequence covariation, whose prediction difficulty has been one of the main limitations to the advancement of computational RNA 3D structure prediction methods. \begin{figure}[!h] \begin{center} \includegraphics[width=16cm]{RNA_003_02.pdf} \end{center} \caption{Statistics-based contact prediction from coevolutionary data improves 3D RNA structure prediction} \end{figure} \subsection{Direct coupling analysis (DCA)} The basic assumption of this method is to associate the probability of observation $P(\sigma)$ of a given sequence $\sigma$ = ($a_1$, $a_2$ ...$a_L$) of length $L$ in a MSA to the Hamiltonian energy function $H(\sigma)$ using the Boltzmann law \begin{equation} P(\sigma)=\frac{1}{\mathcal{Z}}e^{- \beta H(\sigma)} \end{equation} \noindent where $\beta$ is the inverse temperature and $\mathcal{Z}$ is the partition function of the system, and where the Hamiltonian is assumed to have the following simplified form \begin{equation} H(\sigma) = -\sum_{i=1}^L h_i (a_i) -\sum_{i=1}^{L-1}\sum_{j=i+1}^{L} J_{ij} (a_i,a_j) \end{equation} consisting only of single-site terms, \emph{i.e.} $h_i (a_i)$, and residue pair interactions $J_{ij} (a_i,a_j)$. These parameters can be inferred from the MSA using a plethora of different approaches. For example, in \cite{MarksIII} a pseudo-maximum likelihood (pmlDCA) approximation has been employed, while a computationally intensive message-passing algorithm (mpDCA) is used in \cite{Weigt2009} and a more efficient mean-field algorithm (mfDCA) in \cite{Weigt}. A list of other popular algorithms used in the inverse inference step can be found in \cite{Alex}. There are also pitfalls. Frequently, some species are over-represented in the MSA, e.g., because of their medical importance or the ease of handling them experimentally. Thus, these sequences need to be re-weighted. In addition, the quality of the MSA, such as the proper placement of gap regions, influences the contact prediction accuracy. This loss of contact prediction precision directly leads to a decreased quality of the 3D prediction.
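To make the re-weighting and the subsequent contact extraction concrete, the following Python sketch illustrates both steps under stated assumptions (a toy illustration, not code from the implementations cited above): sequences within 80\% identity share a single unit of weight, frequency counts are regularized by a pseudocount, and the coupling scores obtained from the inverse inference step are then ranked, keeping only well-separated pairs as candidate tertiary contacts.

\begin{verbatim}
import numpy as np

def dca_frequencies(msa, q=5, theta=0.2, pc=0.5):
    """Re-weighted single-site and pair frequencies from an MSA.

    msa   : (B, L) integer array, RNA alphabet A,C,G,U,- mapped to 0..4
    theta : sequences more than (1 - theta)-identical share one weight
    pc    : pseudocount fraction used for regularization
    """
    B, L = msa.shape
    # Down-weight over-represented sequences: each sequence gets weight
    # 1 / (number of sequences within 80% identity, itself included).
    identity = (msa[:, None, :] == msa[None, :, :]).mean(axis=2)
    w = 1.0 / (identity >= 1.0 - theta).sum(axis=1)
    beff = w.sum()
    onehot = np.eye(q)[msa]                                # (B, L, q)
    fi = np.einsum('b,bia->ia', w, onehot) / beff
    fij = np.einsum('b,bia,bjc->ijac', w, onehot, onehot) / beff
    # Regularize toward the uniform distribution.
    return (1 - pc) * fi + pc / q, (1 - pc) * fij + pc / q**2

def top_contacts(score, n_top=100, min_sep=5):
    """Rank a symmetric (L, L) coupling-score matrix (e.g., from the
    mean-field inversion of mfDCA) and keep the n_top pairs separated
    by at least min_sep positions along the sequence."""
    L = score.shape[0]
    pairs = [(score[i, j], i, j)
             for i in range(L) for j in range(i + min_sep, L)]
    pairs.sort(reverse=True)
    return [(i, j) for _, i, j in pairs[:n_top]]
\end{verbatim}

The connected correlations $C_{ij}(a,b)=f_{ij}(a,b)-f_i(a)f_j(b)$ built from these frequencies are the input of the mean-field inversion step of mfDCA, and the retained top pairs are what is passed as distance constraints to the modeling tools discussed in the next subsection.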
Another drawback is that the DCA prediction of tertiary contacts is far from perfect, with only a modest overall fraction of true positive (TP) predicted contacts; it should be noted, however, that relatively few ($O(10)$) top-ranking pairs, which show a higher TP rate, are already sufficient to boost the performance of the structural modeling. Still, these methods significantly boost performance without too much computational effort. \subsection{Contact-guided 3D RNA-structure prediction} While the use of coevolutionary data has already been fruitfully applied to protein structure determination during the last decade \cite{Marks,MarksI,Weigt2009,Schug2009,Dago2012,Weigt,Baker222,Sulkowska,Jones}, the contact-guided prediction of the three-dimensional RNA structure is relatively new. Indeed, the previous mutual-information (local) approach to the extraction of coevolution signals from a MSA was not sufficiently accurate \cite{West} to provide reliable tertiary contact predictions. Recent investigations \cite{Alex,MarksIII,Wang} instead show that a global approach extracting the top-ranked site pairs with the strongest co-evolutionary signals, as sketched above, can be efficiently employed to provide distance constraints in modeling tools. In \cite{Alex} the authors show that in the prediction of the structure of six representative riboswitches with the Rosetta-based method FARFAR, the use of tertiary contacts predicted by mfDCA improves the RMSD on average by about 30\% with respect to the case in which only secondary structure information (SSI) is provided. In Fig.~4 we report this explicit comparison for all six structures considered. \begin{figure}[!h] \begin{center} \includegraphics[width=15.5cm]{RNA_004_2.png} \end{center} \caption{(a) DCA contact-guided RNA structure prediction improvement with respect to the state-of-the-art (Rosetta-based) method for the six riboswitches from \cite{Alex}. (b) Overlay of the DCA-contact-guided predicted (blue) and the experimental structure (green) for the thiamine pyrophosphate-specific (TPP) riboswitch (PDB code 2gdi). In the prediction, the first 100 top contacts as computed via mean-field DCA from the MSA of the RF00059 family have been used as constraints in the FARFAR method.} \end{figure} These results have been confirmed in \cite{Marks}, where the authors show a significant improvement of prediction quality when evolutionary contact predictions computed via the pmlDCA approach are used. In this work, contacts are used as spatial constraints in the NAST coarse-grained structure prediction method. A further confirmation in \cite{Wang} highlights that prediction RMSDs for the same structures as analyzed in \cite{Alex,Marks} are lowered by about 30\% when using the tertiary contacts predicted via mfDCA in the 3dRNA method \cite{3dRNA}, compared to not using such tertiary contact constraints. Refs.~\cite{Marks} and \cite{Zhou} also demonstrated that DCA-based methods show good accuracy in the prediction of intermolecular RNA-protein contacts. \section{Future challenges and outlook} Even if tremendous advances have been achieved in RNA structure prediction in the last decade, its accuracy is still not as high as for protein structure prediction. Moreover, there are open and intriguing challenges in the field that will hopefully be tackled in the near future: \begin{itemize} \item The role played by environmental conditions such as ions, which strongly influence the RNA structure, has to be fully investigated and clarified \cite{IONSI,IONSII,IONSIII}.
Since RNA can adopt different conformations \emph{in vivo} than \emph{in vitro}, it will also be important to understand such differences, which can give important information for RNA biology. \item In the coming years, thanks to the advancement of next-generation sequencing technologies, the amount of sequence information will continue to increase exponentially. Currently, coevolutionary methods focus on the prediction of two-site interactions (contacts), but this increased amount of information promises to also allow the prediction of higher-order correlations that could further boost structure prediction methods. \item Further improvements of RNA force fields will continue to increase the accuracy of predictions. These can help to better understand the role of the different RNA conformations and their stability, and to gain new insights into RNA structural dynamics. \item Combining structure prediction methods or simulations with experimental data such as selective 2′-hydroxyl acylation analyzed by primer extension (SHAPE) \cite{Kirmizialtin2015}, Fluorescence Resonance Energy Transfer (FRET) \cite{Reinartz2018} or small-angle X-ray scattering (SAXS) \cite{JPCI, Weiel2019, JMB} will allow one to probe RNA structures where a single method fails \cite{COMPMEETEXP}. \item Intermolecular protein interactions and contacts can be predicted via DCA and related methods \cite{szurmant2018}. This could be transferred to RNA. \item Finally, it is becoming more and more clear that base modifications such as methylation or deamination play an important role in RNA biology by modifying the structure as well as the function of RNA. It could thus be of great interest in the near future to address and investigate these (epi)transcriptomics data to better understand all biological processes in which RNA is involved. \end{itemize}
{ "timestamp": "2019-04-16T02:09:09", "yymm": "1904", "arxiv_id": "1904.06514", "language": "en", "url": "https://arxiv.org/abs/1904.06514" }
\section{Introduction}\label{Sec:Introduction} \IEEEPARstart{F}{iber} nonlinearities are considered to be one of the limiting factors for achieving higher information rates in coherent optical transmission systems \cite{EssiambreJLT2010}. Advanced modulation formats with geometric and probabilistic shaping have been extensively explored with the aim of increasing achievable information rates (AIRs) \cite{Karlsson:09,AgrellJLT2009, TobiasJLT16}. Meanwhile, signal shaping has also been considered to mitigate the effects of fiber nonlinearities \cite{Shiner:14,Kojima2017JLT,El-RahmanJLT2018,BendimeradECOC2018,BinChenJLT2019}. Polarization multiplexing (PM) naturally allows modulation in a four-dimensional (4D) space, which has the potential to increase achievable information rates when the modulation is truly designed in 4D. Conventional PM formats such as PM-$M$QAM, however, are only optimized per two dimensions independently, and thus do not exploit all the available degrees of freedom. Several power-efficient modulation formats have been proposed using sphere-packing and lattice constructions in 4D and 8D space \cite{AgrellJLT2009,Karlsson:09,KoikeAkinoECOC2013,Millar:14}. These designs, however, aim at optimizing the minimum Euclidean distance (ED) of the constellation, and thus, they are optimal only for asymptotically high SNR, in the linear additive white Gaussian noise (AWGN) channel, and for uncoded metrics such as symbol- and bit-error probability only \cite[Sec.~IV-A]{AlexTIT2018}. Some of these multidimensional (MD) modulation formats were also shown to give high mutual information (MI), but are not well-suited for coded systems based on a bit-wise decoder such as bit-interleaved coded modulation (BICM), i.e., their generalized mutual information (GMI) is quite low \cite{Alvarado2015_JLT}. MD constant-modulus modulation formats have been proposed \cite{Chagnon:13,ReimerOFC2016,Kojima2017JLT} to mitigate the nonlinear interference. One example of this is the 4D 64-ary polarization-ring-switching (4D-64PRS) format we recently proposed in \cite{BinChenJLT2019}. 4D-64PRS was shown to outperform other modulation formats at a spectral efficiency (SE) of 6~bit/4D-sym by jointly optimizing the coordinates and labeling. 8D modulation formats have twice as many degrees of freedom, and thus can improve the AIRs and nonlinearity tolerance. The 8 dimensions can be obtained by two frequencies \cite{ErikssonECOC2013} or two consecutive time slots \cite{KoikeAkinoECOC2013,Shiner:14}. Our work builds upon the polarization-balancing concept, proposed for a SE of 2 bit/4D-sym in \cite{Shiner:14}. This concept was further investigated in terms of the SE and nonlinearity tolerance trade-off in \cite{El-RahmanJLT2018,Bendimerad:18}. All the previous works using this concept only consider PM-QPSK with added constraints, and thus only 8D formats at SEs below 4 bit/4D-sym were considered. Generalizing those formats to higher SEs is nontrivial, especially when both the constellation and its binary labeling are taken into account. In this paper, we propose an approach to construct two nonlinearity-tolerant modulation formats with a SE of 5.5 bit/4D-sym. The formats are based on set-partitioning 4D-64PRS in two consecutive time slots. The first format is suitable for a coded modulation system with a high code rate. The second is well-suited for lower code rates and also exhibits higher nonlinearity tolerance.
Numerical simulations demonstrate increased nonlinearity tolerance and transmission reach with respect to other modulation formats. \vspace{-0.5em} \section{8D Polarization-ring-switching Formats}\label{sec:design} In optical transmission systems, the performance of a given modulation format is determined by its tolerance to both nonlinear interference arising from the Kerr effect and accumulated amplified spontaneous emission noise. Therefore, designing modulation formats which increase the AIRs in the presence of linear and nonlinear impairments is crucial. In \cite{BinChenJLT2019}, we designed the 4D-64PRS format with SE 6~bit/4D-sym, which has a constant modulus and an optimized binary labeling. 4D-64PRS provides excellent linear and nonlinear gains with respect to other modulation formats at the same SE. The structure and binary labeling of 4D-64PRS are shown in Fig.~\ref{fig:4D_64_modulation_label}. The bits $b_1,b_2,b_4,b_5$ determine the two 2D quadrants while $b_3,b_6$ determine the actual transmitted symbol. \begin{figure}[!tb] \centering \scalebox{0.9}{ \includegraphics[scale=1]{./4D64PRS_PolX.pdf}\hspace{0.5em} \includegraphics[scale=1]{./4D64PRS_PolY.pdf} } \vspace{-0.5em} \caption{2D projections of 4D-64PRS and its binary labeling. The rings are given by $R_1^2={\nu_1^2+\nu_3^2}$ and $R_2^2=2\nu_2^2$.} \label{fig:4D_64_modulation_label} \vspace{-1.5em} \end{figure} Let $\boldsymbol{S}=[S_1,S_2,S_3]$ denote the Stokes vector with $S_1= |X|^2-|Y|^2$, $S_2=2\Re\{XY^*\}$, and $S_3=2\Im\{XY^*\}$, where $X$ and $Y$ are complex numbers representing the constellation symbols in the $\text{x}$- and $\text{y}$-polarization, resp. The symbols of 4D-64PRS result in 16 distinct states of polarization (SOPs) and have a constant modulus ($\|\boldsymbol{S}\|=1$). This is shown in Fig.~\ref{fig:8D-2048PRS} (ignoring the colors). If the 4D-64PRS format were used in two consecutive time slots ($T_1$ and $T_2$), there would be $2^{12}=4096$ 8D symbols forming a set $\mathcal{X}\subset\mathbb{R}^8$, which can be represented by 12 bits $b_1, b_2, \ldots, b_{12}$. In this paper, we are interested in designing formats with a SE of 5.5~bit/4D-sym (11~bit/8D-sym), and thus, we will use $b_{12}$ as a parity bit to effectively remove $2048$ of the $4096$ symbols. In order to achieve better performance for optical fiber communication systems, we design 8D modulation formats with better sensitivity and high nonlinearity tolerance by selecting symbols with a larger minimum Euclidean distance and a smaller degree of polarization (DOP) in consecutive time slots. The DOP for the $i$th transmitted 8D symbol is defined as $p_i=\frac{||\boldsymbol{S}_{t_1}+\boldsymbol{S}_{t_2}||}{|X_{t_1}|^2+|Y_{t_1}|^2+|X_{t_2}|^2+|Y_{t_2}|^2}$, where $0\leq p_i \leq 1$, and $t_1$ and $t_2$ indicate time slots 1 and 2. It is known that the worst symbols for nonlinearity tolerance are polarization-identical (PI) symbols, which have identical SOPs in the two time slots and thus maximum DOP ($p=1$). Therefore, we first avoid all the strongest cross-polarization modulation (XPolM)-inducing PI symbols contained in 4D-64PRS and then jointly consider the SOP and the Euclidean distance to select 2048 polarization-nonidentical symbols ($p<1$) from the 4D-64PRS constellation set for two SNR regimes: high SNR and medium SNR. We obtained two types of 8D modulation formats with 5.5~bit/4D-sym.
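For concreteness, the Stokes-vector and DOP computations used in this selection can be sketched in a few lines of Python (an illustration only; symbols are assumed normalized to unit power per time slot):

\begin{verbatim}
import numpy as np

def stokes(X, Y):
    """Stokes vector S = (S1, S2, S3) of a dual-polarization symbol
    with complex amplitudes X (x-pol) and Y (y-pol)."""
    return np.array([abs(X)**2 - abs(Y)**2,
                     2 * (X * np.conj(Y)).real,
                     2 * (X * np.conj(Y)).imag])

def dop_8d(X1, Y1, X2, Y2):
    """DOP p of an 8D symbol spanning two consecutive time slots."""
    S = stokes(X1, Y1) + stokes(X2, Y2)
    power = abs(X1)**2 + abs(Y1)**2 + abs(X2)**2 + abs(Y2)**2
    return np.linalg.norm(S) / power

# Identical SOPs in the two slots give p = 1 (the strongest
# XPolM-inducing case, excluded in our design), while orthogonal
# SOPs cancel in Stokes space and give p = 0.
print(dop_8d(1, 0, 1, 0))  # 1.0
print(dop_8d(1, 0, 0, 1))  # 0.0
\end{verbatim}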
One overhead bit is employed to choose points from the set $\mathcal{X}$ and can be obtained by the following methods (a short illustrative sketch of both constructions is given further below): \begin{itemize}[leftmargin=1ex] \item Type 1: $b_{12}$ is the parity bit of a single-parity-check code protecting all information bits, i.e., an exclusive or (XOR) of all the bits $b_1, b_2, \cdots, b_{11}$. In this case, the nearest neighboring symbols are removed to maximize the minimum ED, which performs better at higher SNR. The parity bit $b_{12}$ can be obtained as $\overline{b}_{12}= b_1\oplus b_2\oplus \cdots\oplus b_{10}\oplus b_{11}$, where $\oplus$ and $\overline{\cdot}$ denote modulo-2 addition and negation, respectively. \item Type 2: $b_{12}$ is used to protect only the least significant bits, which are $b_3$, $b_6$ and $b_9$. In this case, the modulation is well-suited for medium SNR. In addition, it has more polarization-balanced points in two time slots. The parity bit $b_{12}$ can be obtained as $\overline{b}_{12}={b_3\oplus b_6\oplus b_9}$. \end{itemize} Fig.~\ref{fig:8D-2048PRS} shows the relationship between the SOPs of transmitted symbols in two consecutive time slots for the two designed 8D modulation formats. The color coding scheme used in Fig.~\ref{fig:8D-2048PRS} shows the SOP constraint we imposed on the formats. When a blue point is transmitted in the first time slot, only red points are used in the second time slot. No PI symbols ($p=1$) are allowed in either of these two 8D modulation formats. \begin{figure}[!tb] \centering \begin{tabular}{c} \scalebox{0.83}{ \begin{subfigure}[8D-2048PRS-T1 Left: time slot 1. Right: time slot 2.]{ \includegraphics[width=0.235\textwidth]{./stokes_8D_M1_t1_new.pdf}\hspace{1.1em} \includegraphics[width=0.235\textwidth]{./stokes_8D_M1_t2_new.pdf} } \end{subfigure} } \\ \scalebox{0.83}{ \begin{subfigure}[8D-2048PRS-T2 Left: time slot 1. Right: time slot 2. ]{ \includegraphics[width=0.25\textwidth]{./stokes_8D_M2_t1.pdf}\hspace{0.5em} \includegraphics[width=0.25\textwidth]{./stokes_8D_M2_t2.pdf} } \end{subfigure} } \end{tabular} \vspace{-0.75em} \caption{Stokes representation of the designed 8D formats for two consecutive time slots. When colors are not considered, all four figures correspond to the Stokes representation of 4D-64PRS.} \label{fig:8D-2048PRS} \vspace{-1em} \end{figure} \begin{table}[!tb]\caption{Comparison of different modulation formats with SEs of 5.5~and~6.0~[bit/4D-sym].} \label{tab:property} \vspace{-0.5em} \centering \scalebox{0.7} { \begin{tabular}{c|c|c|c|c|c} \hline \hline &SE& $d^2_E$ & $\alpha$ & $\beta$ & Modulus \\ \hline \hline PM-8QAM & 6 & 0.84 & 1 &0.70 &Not Constant \\ \hline 4D-2A8PSK \cite{Kojima2017JLT}& 6 & 0.88 & 1 &0.65 &Constant \\ \hline 4D-64PRS \cite{BinChenJLT2019}& 6 & 0.66 & 1 &0.65 & Constant\\ \hline 8D-2048PRS-T1 & 5.5& 1.15 & 0.96 & 0.64 &Constant\\ \hline 8D-2048PRS-T2 & 5.5& 0.76 & 0.87 & 0.55 &Constant\\ \hline \end{tabular} } \vspace{-1.5em} \end{table} To inform our intuition on design features that influence linear and nonlinear performance, we list the properties of five modulation formats in Table \ref{tab:property} for comparison. The squared minimum Euclidean distance is denoted by $d^{2}_E$. In addition to the constant modulus, we propose two performance metrics for evaluating modulation-dependent nonlinear interference: the maximum DOP and the average DOP, which are calculated over all the possible $M$ transmitted symbols in two consecutive time slots for a given modulation format.
The maximum DOP is defined as $\alpha=\max_{i\in\{1,2,\ldots,M\}} p_i$ and the average DOP as $\beta=\frac{1}{M}\sum_{i=1}^{M}p_i$. A larger $d^{2}_E$ should result in better linear sensitivity, while smaller $\alpha$ and $\beta$ should in principle result in higher nonlinear noise tolerance. Based on these properties, the two 8D modulation formats should be better than the other three modulation formats in both the linear and nonlinear regimes, as will be shown in Sec.~\ref{Sec:Simulation}. \vspace{-0.5em} \section{Performance Evaluation}\label{Sec:Simulation} Here we compare the performance of four different modulation formats: PM-8QAM, 5.5b4D-2A8PSK\footnote{The constellation 5.5b4D-2A8PSK is generated by using 5b4D-2A8PSK and 6b4D-2A8PSK from \cite{KojimaOFC2017} with optimized ring ratios in a time-domain hybrid way with a 1:1 ratio.}, and the two proposed 8D-2048PRS formats. We use PM-8QAM as a baseline to show the nonlinearity-tolerant property of the 8D formats, and choose 5.5b4D-2A8PSK with the same SE as a baseline to show the overall performance improvement. The formats were compared via two performance metrics: normalized GMI (NGMI) and effective SNR\footnote{The effective SNR (denoted by $\text{SNR}_{\text{eff}}$) represents the SNR after fiber propagation and the receiver digital signal processing (DSP) and is defined as in \cite[Eq.~(16)]{TobiasJLT16}.}. The NGMI is given by NGMI=GMI/$m$, where $m$ is the number of bits per 4D symbol of the format; it shows the gains for a BICM system with the same soft-decision forward error correction (SD-FEC) overhead. The effective SNR quantifies the gains due to nonlinearity tolerance. \vspace{-0.7em} \subsection{Linear Channel Performance}\label{Sec:SimulationLinear} Fig. \ref{fig:NGMI_AWGN} shows the NGMIs for the linear AWGN channel. 8D-2048PRS-T1 and 8D-2048PRS-T2 clearly outperform both PM-8QAM and 5.5b4D-2A8PSK for all NGMIs above $0.6$. At an NGMI of 0.85 (state-of-the-art SD-FEC with 25\% overhead), 8D-2048PRS-T1 offers gains of 1.15~dB and 0.25~dB with respect to PM-8QAM and 5.5b4D-2A8PSK, resp. These gains increase up to $1.6$~dB and $0.7$~dB at high SNRs (at an NGMI of 0.965). \begin{figure}[!tb] \centering \includegraphics[scale=1]{./AWGN_SNRvsNGMI.pdf} \vspace{-2em} \caption{NGMI vs. SNR for the linear AWGN channel. The black diamond represents the switching point for the two 8D formats.} \label{fig:NGMI_AWGN} \vspace{-1.5em} \end{figure} \vspace{-0.7em} \subsection{Nonlinear Channel Performance}\label{Sec:SimulationNonLinear} We consider a dual-polarization multi-span WDM system with 11 co-propagating channels generated at a symbol rate of 45 GBaud, a WDM spacing of 50 GHz and a root-raised-cosine (RRC) filter roll-off factor of 0.1. Each WDM channel carries $2^{16}$ 4D symbols in two polarizations at the same launch power per channel $P_{\text{ch}}$. Each span consists of 80~km of standard single-mode fiber (SSMF), simulated via a split-step Fourier solution of the nonlinear Manakov equation with a step size of 0.1~km, and is followed by an erbium-doped fiber amplifier with a noise figure of $5$~dB. We also simulate polarization mode dispersion (PMD) with the coarse-step method \cite{MarcuseJLT1997} and fixed-length sections of length 1~km.
For the statistical characterisation of PMD and its effect on fiber transmission, the polarization is uniformly scattered over the Poincar\'{e} sphere and the differential group delays (DGDs) of each section are selected randomly from a Gaussian distribution with standard deviation equal to $20\%$ of the mean \cite{ProlaPTL1997}. At the receiver side, an ideal receiver is implemented\footnote{In this paper, we use ideal 8D phase compensation and 8D detection. However, due to the symmetry properties and set-partitioned structure of the proposed modulation family, the 8D formats can be equalized and demapped in 4D or even 2D with marginal loss and lower complexity \cite{BendimeradECOC2018,SjoerdECOC2019}.} and fiber linear impairments such as the accumulated chromatic dispersion or the polarisation state rotation of the signal are \textit{ideally}\footnote{Ideal compensation refers to having at the receiver exact knowledge of the amount of randomly generated angles and DGD values in the fiber simulation.} compensated. \begin{figure}[!tb] \centering \includegraphics[scale=1]{./OpticalChannel_DistancevsSNR.pdf} \vspace{-2em} \caption{$\text{SNR}_{\text{eff}}$ vs. transmission distance at $P_{\text{ch}}=0$~dBm. Inset: $\text{SNR}_{\text{eff}}$ vs. launch power per channel $P_{\text{ch}}$ for an 8000~km link.} \label{fig:effectiveSNRvsD} \vspace{-1.5em} \end{figure} First, we consider propagation without PMD, setting the PMD coefficient to zero. We compare the $\text{SNR}_{\text{eff}}$ as a function of the transmission distance using $P_{\text{ch}}=0$~dBm (the optimum $P_{\text{ch}}$ for 100 spans). The results are shown in Fig. \ref{fig:effectiveSNRvsD}. The two proposed 8D formats 8D-2048PRS-T1 and 8D-2048PRS-T2 provide a higher $\text{SNR}_{\text{eff}}$ than PM-8QAM and 5.5b4D-2A8PSK. In particular, 8D-2048PRS-T2 shows higher SNR gains due to its smaller values of the nonlinearity-tolerance metrics $\alpha$ and $\beta$ in Table~\ref{tab:property}. From the results above, we observe that the proposed 8D-2048PRS formats outperform the other modulation formats in both the linear and the nonlinear channel without PMD. The total shaping gain is the linear SNR gain (in Fig. \ref{fig:NGMI_AWGN}) plus the $\text{SNR}_{\text{eff}}$ gain (in Fig. \ref{fig:effectiveSNRvsD}). In order to characterise the impact of the fiber PMD parameter, we consider realistic PMD values in the range of $0.01-0.2~\text{ps}/\sqrt{\text{km}}$ and average the $\text{SNR}_{\text{eff}}$ over 50 random PMD realizations for each data point. In Fig. 5, the average $\text{SNR}_{\text{eff}}$ is shown as a function of the PMD parameter using 0~dBm launch power per channel over a transmission distance of 8000 km. The $\text{SNR}_{\text{eff}}$ values without PMD are shown by the dashed lines as a reference. Fig. 5 shows that PMD has a small positive impact on the $\text{SNR}_{\text{eff}}$ for all the modulation formats. This confirms that random PMD depolarizes signals, averages out nonlinear effects, and thus reduces the nonlinear penalty when PMD itself is fully compensated by DSP at the receiver. In addition, the average $\text{SNR}_{\text{eff}}$ gain of the 8D formats over PM-8QAM decreases from 0.33~dB to 0.24~dB in the high-PMD regime because the SOP changes during propagation. The inset of Fig. 5 shows that PM-8QAM has a larger $\text{SNR}_{\text{eff}}$ variation and a $0.36$~dB lower $\text{SNR}_{\text{eff}}$ in the worst-case scenario w.r.t. 8D-2048PRS-T2.
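The parity-bit selection announced in Sec.~\ref{sec:design} can be illustrated by the following minimal Python sketch (for illustration only; the mapping from the 12 bits to the two 4D-64PRS symbols is left abstract):

\begin{verbatim}
import itertools

def parity_type1(b):
    """Type 1: NOT(b12) is the XOR of all information bits b1..b11."""
    p = 0
    for bit in b[:11]:
        p ^= bit
    return 1 - p

def parity_type2(b):
    """Type 2: NOT(b12) is the XOR of the least significant bits
    b3, b6 and b9 only."""
    return 1 - (b[2] ^ b[5] ^ b[8])

def build_8d_set(parity_rule):
    """Keep the 2048 of the 4096 two-slot 4D-64PRS label patterns
    whose 12th bit satisfies the chosen parity rule."""
    return [bits for bits in itertools.product((0, 1), repeat=12)
            if bits[11] == parity_rule(bits)]

assert len(build_8d_set(parity_type1)) == 2048
assert len(build_8d_set(parity_type2)) == 2048
\end{verbatim}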
\begin{figure}[!tb] \centering \includegraphics[scale=1]{./OpticalChannel_PMDvsSNR.pdf} \vspace{-1em} \caption{Average $\text{SNR}_{\text{eff}}$ as a function of the fiber PMD parameter at $P_{\text{ch}}=0$~dBm for transmission over 8000 km. Inset: Histograms of $\text{SNR}_{\text{eff}}$ values obtained for PM-8QAM and 8D-2048PRS-T2 with PMD=0.1~ps/$\sqrt{\text{km}}$.} \label{fig:NGMIvsPMD} \vspace{-1em} \end{figure} Fig. \ref{fig:GMIvsD} shows the NGMI results without PMD as a function of the transmission distance, using the optimal launch power at each distance. In addition, the recovered 8D-2048PRS-T2 constellation after 20 spans (in Stokes space) is shown as an inset. Note that the two proposed constellations yield a 26-span ($28.6\%$) and a 7-span ($6.7\%$) reach increase relative to PM-8QAM and 5.5b4D-2A8PSK, respectively, at an NGMI of 0.85. \vspace{-0.5em} \section{Conclusions} We have designed two new nonlinearity-tolerant 8D modulation formats at a spectral efficiency of 5.5 bit/4D-sym and have provided a simple bit-to-symbol mapping by set-partitioning, which shows that these formats can be implemented with slight modifications to 4D-64PRS. Although at a lower SE, the 8D-2048PRS formats outperform PM-8QAM by significantly improving sensitivity and nonlinearity tolerance. In comparison to modulation formats of the same spectral efficiency such as 5.5b4D-2A8PSK, a $6.7\%$ reach increase is observed. The impact of PMD on the proposed modulation formats was numerically analyzed, showing their tolerance to nonlinear effects. We believe that the proposed 8D formats are promising candidates for transmission systems with high nonlinearity, and can be extended to higher dimensions, including wavelengths and mode/core spatial channels. Future work will also address a realistic comparison with probabilistic shaping. \begin{figure}[!tb] \centering \includegraphics[scale=1]{./OpticalChannel_DistancevsNGMI.pdf} \vspace{-2em} \caption{NGMI versus transmission distance (without PMD). Inset: Stokes space projection of the received symbols for 8D-2048PRS-T2 after 20 spans. } \label{fig:GMIvsD} \vspace{-1em} \end{figure} \small{\noindent \textbf{Acknowledgements:} The authors would like to thank Dr. Gabriele Liga (Eindhoven University of Technology, The Netherlands) for the useful discussions.} \vspace{-1em} \bibliographystyle{IEEEtran}
{ "timestamp": "2019-09-24T02:05:21", "yymm": "1904", "arxiv_id": "1904.06679", "language": "en", "url": "https://arxiv.org/abs/1904.06679" }
\section{Introduction} \label{sec1} The physics of three-dimensional (3D) Weyl semimetals (WSMs) is presently attracting a lot of interest. For several different candidate materials, experiments have recently revealed WSM signatures in various observables \cite{Jia2016,Yan2017,Hasan2017}. Within band theory, WSMs have an even number of touching points (the so-called Weyl nodes) in the Brillouin zone. Near those special points, low-energy quasi-particles have a linear spectrum and represent Weyl fermions \cite{Volovik,Burkov2016,Hosur2013,Burkov2015,Burkov2018,Armitage2018}. The Weyl character of low-energy fermions implies the existence of a chiral anomaly which in turn produces characteristic signatures in experimentally accessible observables such as the magnetoconductivity \cite{Burkov2018}. The remarkable transport features of WSMs may also lead to useful practical applications \cite{Ali,Parameswaran}. We here study the theory of electron-phonon (e-ph) interactions in WSMs. Apart from the case of optical phonons \cite{Song2016,Rinkel2017,Liu2017,Gordon2018,Rinkel,new19}, the exploration of e-ph coupling effects in WSMs has not received much attention by theorists so far. However, it has been pointed out that in the static (frozen phonon) limit, strain engineering can be used to induce pseudo-scalar and pseudo-vector potentials that couple to Weyl fermions \cite{Liu2013,Cortijo2015,Shapourian2015,Pikulin2016,Grushin2016,Moeller2017,Ferreiros2019}. We here focus on low-energy long-wavelength acoustic phonons with linear dispersion, schematically written as $\Omega(\mathbf q)=c_{ph}|\mathbf q|$ with sound velocity $c_{ ph}$. The linear dispersion of phonons as well as Weyl fermions suggests the existence of a scale-invariant effective action that may allow for nontrivial fixed points under the renormalization group (RG). We shall assume below that all relevant phonon momenta are well below the momentum separation $b$ between a time-reversed pair of Weyl nodes, $|\mathbf q|\ll b$, such that phonons cannot scatter electrons between Weyl points at low temperatures. However, at elevated temperatures, $T\agt c_{ph} b/k_B$, this assumption breaks down and additional processes not considered in this work could take place. For insulators or semiconductors, the most important couplings between electrons and acoustic phonons generally originate from either the deformation potential or the piezoelectric interaction \cite{MahanBook,Yu,Giustino}. While the former is a short-range interaction, the latter represents an anisotropic long-range interaction that only exists for inversion-symmetry-breaking crystals. The so-called direct piezoelectric effect refers to the appearance of an electric polarization when a material is subjected to static stress. On the other hand, in a metal, free charge carriers will screen the electric fields produced by local dipole moments, thereby preventing any macroscopic polarization. Nonetheless, it is still possible to speak of piezoelectricity in metals by measuring the bulk electric current in response to a time-dependent strain \cite{Varjas,Vanderbilt}. Electric currents in response to strain have been discussed in the context of WSMs in Ref.~\cite{Cortijo}. Below we will employ piezoelectric coupling expressions derived within the phenomenological theory of electronic insulators \cite{Mahan}. 
The main assumptions behind this approach are that the electric field produced by phonons is approximately longitudinal, and that there are no free charge carriers responsible for screening. In that case, $\nabla\cdot \mathbf D=0$ can be assumed for the electric displacement field $\mathbf D$. A microscopic derivation of the piezoelectric coupling \cite{Vogl} gives further support to this phenomenological theory. The microscopic approach directly applies to insulators, where one can neglect the frequency dependence of the permittivity at frequencies well below the energy gap. In \emph{undoped} WSMs, the Fermi level is aligned with a Weyl point. Although the spectrum is gapless, screening is absent since the density of states vanishes at the Weyl point even when weak disorder is taken into account \cite{Altland2018}. In fact, electron-electron (e-e) interactions are marginally irrelevant in 3D WSMs, such that the dielectric function picks up only logarithmic corrections at low energy scales \cite{Abrikosov,Isobe1,Isobe,Yang,Throckmorton}. However, when computing finite-temperature observables, it may be necessary to include the dynamic screening effects represented by these logarithmic corrections, as we will discuss in Sec.~\ref{sec4c} in more detail. We thus conclude that the piezoelectric coupling in undoped WSMs can be obtained along the lines of Refs.~\cite{Mahan,Giustino,Vogl}, see Eq.~\eqref{piezoqdep} below. If piezoelectric couplings are finite, we find that they dominate over all other types of e-ph couplings, which represent RG-irrelevant short-range interactions. Since many WSMs discovered so far belong to polar crystal symmetry classes, e.g., the ditetragonal-pyramidal $4mm$ class for TaAs, piezoelectric couplings are expected to play an important role for a wide class of WSM materials. Our general results will be illustrated below for the concrete case of TaAs, which also represents one of the experimentally most intensely studied WSMs \cite{Xu2015,Lv2015,Yang2015,Lv2015b,Huang2015,Zhang2016,Zhou2016,Arnold2016,Xu2017,Zhang2017}. For related \emph{ab initio} results, see Refs.~\cite{Huang2015b,Buckeridge}. In this paper, we present an analytical theory capturing the generic low-energy physics of undoped 3D WSMs, taking into account the piezoelectric e-ph interaction. We also include e-e interactions even though they represent marginally irrelevant perturbations in WSMs. Nonetheless, their interplay with the piezoelectric coupling may lead to an instability in the RG flow \cite{Cardy} which drives the WSM into a Weyl superconductor \cite{Armitage2018,Meng,Cho,Wei,Hosur,Bednik,Li,Gorbar2019} phase. For a related study of e-e and e-ph interactions in the context of two-dimensional (2D) Dirac fermions in graphene layers, see Ref.~\cite{Basko}. The main limitations of our theory come from the neglect of disorder and from the often rather complex band structure of real WSM materials. Moreover, we confine ourselves to \emph{bulk} properties only, leaving studies of surface state properties to future research. The structure of the remainder of this paper is as follows. In Sec.~\ref{sec2}, we explain the model used in our study, derive the piezoelectric coupling Hamiltonian, and introduce a local field theory capturing both e-e and e-ph interactions. We use this field theory to derive the effective interaction potential between two Weyl fermions and show that the phonon-mediated attractive contribution has a characteristic angular anisotropy.
In Sec.~\ref{sec4a}, we provide parameter estimates for the example of TaAs. In Sec.~\ref{sec3}, we then derive and discuss the RG equations found from a one-loop analysis. We continue in Sec.~\ref{sec4} by investigating the stability of different superconducting phases by an analytical mean-field analysis. In addition, in Sec.~\ref{sec4c}, we address the temperature and momentum dependence of the quasi-particle decay rate for small piezoelectric couplings where no interaction-induced instabilities are expected. Finally, we offer our conclusions in Sec.~\ref{sec5}. Technical details can be found in the Appendix. We put $\hbar=k_B=1$ throughout. \section{Piezoelectric interactions in Weyl semimetals} \label{sec2} In this section, we describe the model used in this work and derive the piezoelectric coupling between electrons and acoustic phonons in undoped WSMs. We first briefly summarize the electronic Weyl Hamiltonian in Sec.~\ref{sec2a}, and then discuss a general acoustic phonon model in Sec.~\ref{sec2b}. We proceed in Sec.~\ref{sec2c} with a derivation of the piezoelectric coupling Hamiltonian. Next, in Sec.~\ref{sec2d}, we introduce a local field theory approach in order to capture both Coulomb interactions and piezoelectric interactions on equal footing. We also derive the attractive phonon-mediated potential and show that it exhibits a pronounced angular anisotropy. \subsection{Weyl Hamiltonian}\label{sec2a} In the absence of e-e and e-ph interactions, fermionic quasi-particles near a given Weyl node are described by the Weyl Hamiltonian \cite{Volovik,Burkov2016,Hosur2013,Burkov2015,Burkov2018}, \begin{equation}\label{Hplus} H_0 =\sum_{\mathbf p}\psi^\dagger(\mathbf p)\left [v_\perp\mathbf p_\perp\cdot\boldsymbol\sigma_\perp+v_3 p_3 \sigma_3\right]\psi(\mathbf p), \end{equation} where the momentum $\mathbf p=(\mathbf p_\perp,p_3)$ is measured with respect to the Weyl node, $\psi=(\psi_\uparrow,\psi_\downarrow)^t$ is a spinor field operator, and the Pauli matrices $\boldsymbol \sigma_\perp=(\sigma_1,\sigma_2)$ and $\sigma_3$ (with identity $\sigma_0$) act in spin space. In Eq.~\eqref{Hplus} we consider anisotropic Fermi velocities, $v_3\ne v_\perp$. In fact, such anisotropies can be generated by the piezoelectric interaction in crystals with tetragonal symmetry, see Sec.~\ref{sec3a4} below. However, for simplicity, we will often specialize to the isotropic case with \begin{equation}\label{isotropic} v_\perp= v_3 = v. \end{equation} Throughout we assume that the chemical potential is located exactly at the Weyl node. WSMs have an even number $2N$ of Weyl nodes in the Brillouin zone. In particular, time-reversal invariant WSMs with at least four Weyl nodes generically appear as intermediate phases between the trivial and the topological insulator phases of non-centrosymmetric semiconductors, where --- depending on the space group of the crystal --- all $2N$ Weyl nodes could be located at the Fermi level \cite{Murakami,Belopolski2017}. For a continuum model that produces four Weyl nodes by breaking the reflection symmetry of a Dirac semimetal, see Ref.~\cite{Hosur2013}.
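As a quick numerical illustration of Eq.~\eqref{Hplus} (a self-contained sketch with arbitrary parameter values), one may diagonalize the $2\times 2$ Bloch Hamiltonian at a given momentum and verify the anisotropic dispersion $E(\mathbf p)$ of Eq.~\eqref{energy1} below:

\begin{verbatim}
import numpy as np

# Pauli matrices acting in spin space.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def weyl_hamiltonian(p, v_perp=1.0, v3=0.8):
    """2x2 Weyl Hamiltonian H(p) = v_perp (p1 s1 + p2 s2) + v3 p3 s3
    near a single node (hbar = 1; velocities are illustrative)."""
    p1, p2, p3 = p
    return v_perp * (p1 * s1 + p2 * s2) + v3 * p3 * s3

p = np.array([0.3, -0.2, 0.5])
evals = np.linalg.eigvalsh(weyl_hamiltonian(p))
E = np.sqrt(1.0**2 * (p[0]**2 + p[1]**2) + 0.8**2 * p[2]**2)
print(evals)    # the two eigenvalues
print((-E, E))  # matches (-E(p), +E(p))
\end{verbatim}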
Below we employ the fermionic Matsubara Green's function (GF) \cite{MahanBook,Altland} for Weyl fermions near a given node, \begin{eqnarray} G_{\sigma\sigma'}(x-x')=-\langle T_\tau \psi^{\phantom\dagger}_\sigma(x)\psi^{\dagger}_{\sigma'}(x')\rangle, \end{eqnarray} where $T_\tau$ denotes imaginary time ($\tau$) ordering, the spin index is $\sigma=\uparrow,\downarrow$, and we use the four-vector notation $x=(\tau,\mathbf r).$ Taking the Fourier transform, with four-momentum $p=(i\omega,\mathbf p)$, the GF has the spin matrix form \begin{equation}\label{GFT} \mathbb G(x)=\frac{1}{\beta V}\sum_{p} e^{-i\omega\tau+i\mathbf p\cdot \mathbf r}\, \mathbb G(p), \end{equation} where $\omega$ denotes fermionic Matsubara frequencies, the volume is $V$, and $\beta=1/T$. Equation~\eqref{Hplus} yields the GF matrix \begin{equation}\label{GF1} \mathbb G(p)=\frac{i\omega\sigma_0+v_\perp\mathbf p_\perp\cdot \boldsymbol\sigma_\perp+v_3p_3\sigma_3}{(i\omega)^2-E^2(\mathbf p)}, \end{equation} which has poles at $i\omega=\pm E(\mathbf p)$ with \begin{equation}\label{energy1} E(\mathbf p)=\sqrt{v_\perp^2\mathbf p_\perp^2+v_3^2p_3^2}. \end{equation} Such a gapless dispersion relation is characteristic of 3D Weyl fermions. For the isotropic case (\ref{isotropic}), this yields the familiar massless Weyl fermion dispersion with $E(\mathbf p)=v|\mathbf p|$. Unless noted otherwise, we consider the thermodynamic limit with $T=0$, where all discrete sums such as those appearing in Eq.~\eqref{GFT} are replaced by integrals. This step also implies that we investigate only bulk physics. It will sometimes be advantageous to work in the band basis where $\mathbb G(p)$ is diagonal. Labeling these bands by $\mu=\pm$ and using Eq.~\eqref{energy1}, we find \begin{equation} \label{bandGF} G_{\mu\mu'}(p)=\frac{\delta_{\mu\mu'}}{i\omega-\mu E(\mathbf p)}\equiv \delta_{\mu\mu'}G_\mu(p). \end{equation} The mode expansion for the fermion field then reads \begin{equation} \label{bandmode} \psi_\sigma(\mathbf r)=\frac1{\sqrt{V}}\sum_{\mathbf p}\mathcal U_{\sigma\mu}(\mathbf p)\psi_{\mathbf p,\mu}e^{i\mathbf p\cdot \mathbf r}, \end{equation} where $\mathcal U(\mathbf p)$ is the unitary matrix that diagonalizes the single-particle Hamiltonian in Eq.~(\ref{Hplus}). Note that $\mathcal U=\mathcal U(\hat {\mathbf p})$ is a function of the angles defined by the unit vector $\hat{\mathbf p}=\mathbf p/|\mathbf p|$ in momentum space. The Fourier transform of the electron density operator, $\rho_e(\mathbf r)=\psi^\dagger\psi$, is then given by \begin{equation}\label{charge1} \rho_e(\mathbf q)=\sum_{\mathbf p,\mu,\mu'}\left[\mathcal U^\dagger(\mathbf p)\mathcal U(\mathbf p+\mathbf q)\right]_{\mu\mu'}\psi^\dagger_{\mathbf p\mu}\psi^{\phantom\dagger}_{\mathbf p+\mathbf q,\mu'}. \end{equation} Allowing for contributions from all $2N$ Weyl nodes in the Brillouin zone (indexed by $h$), we have $\rho_e(\mathbf r)=\sum_h\psi^\dagger_h\psi^{\phantom\dagger}_h$. \subsection{Phonons}\label{sec2b} We here focus on acoustic phonons at long wavelengths. The physics is then described by the lattice displacement field $\mathbf u(\mathbf r)$.
With the linearized strain tensor, \begin{equation} \label{strain} u_{jk}=\frac12(\partial_j u_k+\partial_k u_j), \end{equation} and the fourth-order stiffness tensor $C_{ijkl}$, the Euclidean action is given by \cite{MahanBook,Landau} \begin{equation}\label{phononaction} S_{\rm ph}= \int d^4 x \left( \frac{\rho_0}{2} (\partial_\tau {\bf u})^2+ \frac12\sum_{ijkl} C_{ijkl} u_{ij} u_{kl}\right), \end{equation} where $\rho_0$ is the mass density and $d^4 x=d\tau d^3{\bf r}$. Our main interest in this work is in describing possible electronic instabilities of WSMs due to piezoelectric interactions, and we will therefore not study a specific phonon model. We assume instead that all three ($J=1,2,3$) acoustic phonon modes have a linear dispersion, \begin{equation}\label{phonondisp} \Omega_J(\mathbf q)= c_J(\hat{\mathbf q}) \, |\mathbf q|, \end{equation} where the respective sound velocity, $c_J(\hat{\mathbf q})$, could depend on the angular direction $\hat{\mathbf q}={\bf q}/|{\bf q}|$. Using bosonic annihilation operators, $a^{}_J(\mathbf q)$, the standard mode expansion of the lattice displacement field is given by \cite{MahanBook} \begin{equation} \mathbf u(\mathbf r)= \sum_{J=1}^3\sum_{\mathbf q} \frac {\boldsymbol\epsilon^J(\mathbf q)e^{i\mathbf q\cdot \mathbf r} }{\sqrt{2\rho_0 V\Omega_J(\mathbf q)}} a^{\phantom\dagger}_J(\mathbf q) + {\rm h.c.}, \end{equation} where the $\boldsymbol \epsilon^J(\mathbf q)$ are polarization unit vectors. Next we define the phonon propagator \cite{Altland}, \begin{equation} D_{jk}(x-x')=-\langle T_\tau u_j(x)u_{k}(x')\rangle. \label{Djjphonon} \end{equation} Taking the Fourier transform and using $q=(i\omega,\mathbf q)$ with bosonic Matsubara frequencies $\omega$, we obtain from Eqs.~\eqref{phonondisp} and \eqref{Djjphonon} the result \begin{equation} D_{jk}(q)=\frac1{\rho_0}\sum_{J}\frac{\epsilon^J_j(\mathbf q)\epsilon^J_{k}(-\mathbf q)}{(i\omega)^2-\Omega^2_J(\mathbf q)} = D_{kj}(-q). \label{phonD} \end{equation} For an isotropic continuum, we may identify $J=1$ with the longitudinal mode and $J=2,3$ with the transverse modes, where $c_1=c_l$ and $c_{2,3}=c_t$ denote the longitudinal and transverse sound velocities, respectively. We will often make the simplifying assumption \begin{equation}\label{isophon} c_t = c_l = c_{ph},\quad c_{ph}\ll v, \end{equation} on top of the isotropic Fermi velocity condition \eqref{isotropic}. These assumptions do not affect scaling properties in an essential way. Moreover, relaxing those approximations does not pose conceptual problems and could allow one to take into account \emph{ab initio} results, see, e.g., Refs.~\cite{Buckeridge,Chang2016}. \subsection{Piezoelectric interaction}\label{sec2c} A microscopic derivation of the e-ph interaction in insulators encounters short-range as well as long-range interactions \cite{Vogl,Giustino}. The long-range contributions can be organized in terms of a multipole expansion of the electron-ion interaction potential. The first term in this expansion is a dipolar contribution which must vanish due to the acoustic sum rule. The next terms are quadrupolar contributions which account for piezoelectric couplings and vanish for centrosymmetric materials, but not when inversion symmetry is broken.
A phenomenological derivation \cite{Mahan,Yu} starts from the constitutive relation for the electric displacement, \begin{equation} D_i=\sum_{jk}e_{ijk}u_{jk}+ \sum_j \varepsilon_{ij}E_j,\label{constitutive} \end{equation} where $\mathbf E$ is the external electric field, $e_{ijk}$ the piezoelectric tensor, and $\varepsilon_{ij}$ the permittivity tensor \cite{MahanBook}. A non-vanishing piezoelectric tensor arises if strain can induce $\mathbf D\ne 0$ even for $\mathbf E=0$. The relation $e_{ijk}=(\partial D_i/\partial u_{jk})_{E}$ and the symmetry of the strain tensor, $u_{jk}=u_{kj}$, imply that the piezoelectric tensor is symmetric in the last two indices, $e_{ijk}=e_{ikj}$. In the absence of free charges, from Eq.~\eqref{constitutive} we have \begin{equation} \nabla\cdot \mathbf D=0=\sum_{ijk}e_{ijk}\partial_iu_{jk}+\sum_{ij}\varepsilon_{ij}\partial_iE_j. \end{equation} Taking the Fourier transform gives \begin{equation} \sum_{ij}\varepsilon_{ij}q_iE_j(\mathbf q)=-i\sum_{ijk}e_{ijk}q_iq_ju_{k}(\mathbf q). \end{equation} Assuming for notational simplicity an isotropic permittivity tensor, $\varepsilon_{ij}=\varepsilon\delta_{ij}$, and noting that the electric field is effectively longitudinal \cite{Mahan}, we can write $\mathbf E(\mathbf q)\simeq -i\mathbf q\Phi(\mathbf q)$ with the scalar potential \begin{equation}\label{scalarpot} \Phi(\mathbf q)=\frac{1}{\varepsilon \mathbf q^2} \sum_{ijk}e_{ijk}q_iq_ju_{k}(\mathbf q). \end{equation} The scalar potential \eqref{scalarpot} now couples to the electronic charge density, cf.~Eq.~\eqref{charge1}, resulting in the piezoelectric interaction Hamiltonian \begin{equation}\label{piezoqdep} H_{\textrm{pz}}=\frac{e}{\varepsilon V}\sum_{ijk}\sum_{\mathbf q\ne 0}e_{ijk}\frac{ q_iq_j}{ \mathbf q^2} u_k(\mathbf q)\rho_e(-\mathbf q). \end{equation} We emphasize that the coupling strength in Eq.~\eqref{piezoqdep} depends on the direction of the unit vector $\hat{\mathbf q}$, where the $\mathbf q=0$ mode is omitted to ensure overall electric neutrality. From dimensional analysis, $H_{\rm pz}$ is marginal under RG transformations, and second-order perturbation theory implies a linear-in-$T$ dependence of the quasi-particle decay rate, see Sec.~\ref{sec4c} for details. At low $T$, the piezoelectric interaction will therefore dominate over RG-irrelevant short-range contributions, e.g., from the deformation potential. We find that the latter terms generically cause a quasi-particle decay rate scaling as $\sim T^3$. In fact, for insulators and semiconductors, the piezoelectric interaction is known to dominate small-$q$ scattering if it is allowed by crystal symmetries \cite{Yu}. We emphasize that the piezoelectric interaction is marginal only in three spatial dimensions. In 2D systems, the corresponding operator is relevant instead. In practice, such interactions are then screened above a length scale defined by the bare coupling constant. Finally, in view of the symmetry property $e_{ijk}=e_{ikj}$, it is customary to express the piezoelectric tensor in Voigt notation \cite{Mahan}, \begin{equation} \label{voigt} e_{ijk}=e_{i(jk)}\mapsto e_{im}, \quad m=1,\dots,6, \end{equation} where matrix elements with $(11)\mapsto 1$, $(22)\mapsto 2$, and $(33)\mapsto 3$ correspond to tension or compression, and those with $(23)=(32)\mapsto 4$, $(13)=(31)\mapsto 5$ and $(12)=(21)\mapsto 6$ describe shear.
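As a bookkeeping aid, the index map in Eq.~\eqref{voigt} can be implemented in a few lines. The following minimal sketch (Python, with 0-based indices; the numerical entries are illustrative TaAs-like values implied by the estimates quoted further below) flattens a tensor obeying $e_{ijk}=e_{ikj}$ into the $3\times 6$ Voigt matrix:
\begin{verbatim}
import numpy as np

# Voigt map of Eq. (voigt), 0-based: (jk) -> m
VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2,
         (1, 2): 3, (2, 1): 3,
         (0, 2): 4, (2, 0): 4,
         (0, 1): 5, (1, 0): 5}

def to_voigt(e_ijk):
    """Flatten a rank-3 tensor with e_ijk = e_ikj into a 3x6 matrix e_im."""
    e_im = np.zeros((3, 6))
    for (j, k), m in VOIGT.items():
        e_im[:, m] = e_ijk[:, j, k]
    return e_im

# example: the three independent 4mm components (cf. the next paragraph);
# e15 = 4.95 and e31 = 0.81 follow from e33 = -1.89 C/m^2 and the ratios
# A = e15/e33 = -2.62, B = e31/e33 = -0.43 quoted later in the text
e = np.zeros((3, 3, 3))
e[0, 0, 2] = e[0, 2, 0] = e[1, 1, 2] = e[1, 2, 1] = 4.95   # e15 = e24
e[2, 0, 0] = e[2, 1, 1] = 0.81                             # e31 = e32
e[2, 2, 2] = -1.89                                         # e33
print(to_voigt(e))
\end{verbatim}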
Depending on the crystal symmetry, the various components in Eq.~\eqref{voigt} may be related to one another or vanish identically, see Ref.~\cite{Nelson} for useful tables. For instance, for TaAs with space group $I4_1md$, No.~109, one finds only three independent components, namely $e_{15}$, $e_{31}$ and $e_{33}$. Their respective values have been computed by \emph{ab initio} methods \cite{Buckeridge}. \subsection{Electron-electron interactions}\label{sec2d} As we show below, the piezoelectric interaction \eqref{piezoqdep} generates a long-range e-e interaction that is attractive in the low-frequency limit where retardation effects can be neglected. This phonon-mediated potential has a characteristic angular anisotropy and competes with the repulsive Coulomb interaction in undoped WSMs. We therefore also include Coulomb interactions from now on. To that end, we express the Euclidean action of the system in local form by introducing a scalar bosonic Hubbard-Stratonovich field $\varphi(x)$, see Refs.~\cite{Yang,Throckmorton}. Loosely speaking, the field $\varphi$ describes photon modes mediating Coulomb interactions. It couples to the sources of the electric field, which include both the conduction electron density and the effective charge density generated by strain via the piezoelectric effect. With the phonon action $S_{\rm ph}$ in Eq.~\eqref{phononaction}, we start from the total action \begin{eqnarray} \label{stotal} S&=&S_{\rm ph}+\int d^4x\Bigl[Z^{-1}_\psi \psi^\ast\partial_\tau \psi -iv \psi^\ast ({\bm\nabla}\cdot\boldsymbol\sigma) \psi \\ \nonumber &+& \frac{ Z^{-1}_\varphi}{2} ({\bm \nabla}\varphi)^2 + ig_{ e} \psi^\ast\psi\, \varphi + ig_{ph} \sum_{jkl} e_{jkl} \partial_j \varphi\, u_{kl} \Bigr] . \end{eqnarray} The bare weight of the fermion (Coulomb) field is given by $Z_\psi=1$ ($Z_\varphi=1$). These factors could, however, change during the RG flow, see Sec.~\ref{sec3}. The partition function is thereby expressed as a functional integral over the fermionic Grassmann fields $(\psi,\psi^*)$, the displacement field $\bf u$, and the field $\varphi$, i.e., ${\cal Z} = \int {\cal D}[\psi,\psi^\ast, {\bf u}, \varphi] e^{-S}$ \cite{Altland}. For simplicity, we here assumed isotropic Fermi velocities, cf.~Eq.~\eqref{isotropic}, but we also discuss the general case in Sec.~\ref{sec3}. The action \eqref{stotal} contains two interaction vertices with couplings $g_{e}$ and $g_{ph}$. Their diagrammatic representation is shown in Fig.~\ref{fig1}. To verify that Eq.~\eqref{stotal} indeed encodes the desired interactions, we now integrate out the bosonic field $\varphi$. With $\rho_e=\psi^\ast \psi$ and switching to Fourier space ($d^4 q=d\omega d^3 \mathbf q$), the interacting part of the action is then given by \begin{eqnarray} \label{sint} S_{\rm int}&=& \int \frac{d^4 q}{(2\pi)^4} \Biggl( \frac{g_e^2}{2|\mathbf q|^2} \rho_e(q) \rho_e(-q) + \\ \nonumber &&\qquad +\, g_eg_{ph} \sum_{ijk} e_{ijk} \frac{q_iq_j}{|\mathbf q|^2} u_k(q) \rho_e(-q) + \\ &+& \nonumber \frac{g_{ph}^2}{2} \sum_{ijk}\sum_{lmn} e_{ijk}e_{lmn} \frac{q_{i}q_{j}q_{l}q_{m}}{|\mathbf q|^2} u_k(q) u_n(-q) \Biggr). \end{eqnarray} The first term corresponds to the Coulomb e-e interaction upon choosing $g_e^2=e^2/\varepsilon$, while the second term reproduces the piezoelectric interaction \eqref{piezoqdep} for $g_e g_{ph}=e/\varepsilon$. The bare couplings are therefore given by \begin{equation}\label{gedef} g_e = \frac{e}{\sqrt{\varepsilon}},\qquad g_{ph}= \frac{1}{\sqrt{\varepsilon}}.
\end{equation} We emphasize that the charge $e$ is associated only with the Coulomb vertex $\sim g_e$ in Fig.~\ref{fig1}. In Eqs.~\eqref{stotal} and \eqref{sint}, we have tacitly assumed that intra- and inter-node Coulomb interactions can be taken identical. Since the effects considered in our paper come from the long-range $1/r$ tail of the Coulomb potential, the couplings between long-wavelength density fluctuations $\rho_h$ and $\rho_{h'}$ of electrons near the Weyl nodes $h$ and $h'$, respectively, are approximately described by the same potential. The last term in Eq.~\eqref{sint} describes the energy density associated with strain-induced electric fields. Being quadratic in the strain tensor, this contribution generates the so-called piezoelectric stiffening correction, see Ref.~\cite{Nelson} for details. This modification of the phonon dispersion typically acts to increase sound velocities \cite{Nelson,Rinkel}. Since our main interest here is in electronic instabilities, we simply assume that the phonon velocities $c_J(\hat {\mathbf q})$ in Eq.~\eqref{phonondisp} already incorporate piezoelectric stiffening to all orders in $g_{ph}$. \begin{figure} \begin{centering} \includegraphics[width=0.8\columnwidth]{f1.pdf} \par\end{centering} \caption{\label{fig1} Feynman diagrams for the vertices in Eq.~\eqref{stotal}, coupling the field $\varphi$ (wiggly curve) to (a) electrons (solid line) and to (b) phonons (dashed). The Coulomb (piezoelectric) vertex $\sim g_e$ ($\sim g_{ph}$) is shown as filled (open) circle. } \end{figure} Next we discuss the effective interaction potential between two Weyl fermions described by the above theory. The two diagrams determining the effective e-e interaction at tree level, i.e., to lowest nontrivial order in perturbation theory, are illustrated in Fig.~\ref{fig2}. In particular, Fig.~\ref{fig2}(b) defines a retarded e-e interaction potential, $V_{\rm ph}(q)$, mediated by the piezoelectric interaction, where $q=(i\omega,\mathbf q)$ is the exchanged four-momentum. Using Eq.~\eqref{stotal} and the phonon propagator in Eq.~\eqref{phonD}, we find \begin{equation} \label{phon1} V_{\textrm{ph}}(q)=\sum_J\frac{g_e^2 g_{ph}^2/\rho_0}{(i\omega)^2-\Omega^2_J(\mathbf q)} \sum_{ijk}\frac{\left|e_{ijk} q_iq_j\epsilon^J_k(\hat {\mathbf q})\right|^2}{|\mathbf q|^4}. \end{equation} Neglecting retardation effects by going to the static limit, $\omega \to 0$, the potential can be written as \begin{equation}\label{phononpotential} V_{\textrm{ph}}(\mathbf q)=-\frac{g_e^2 g_{ph}^2}{\rho_0 \mathbf q^2}\gamma(\hat{\mathbf q}), \quad\gamma(\hat {\mathbf q})=\sum_{J=1}^3 \gamma^{}_J(\hat{\mathbf q}) ,\end{equation} with the anisotropy functions \begin{equation}\label{functionGJ} \gamma^{}_J(\hat{\mathbf q})= \frac{1}{c^2_J(\hat{\mathbf q}) |{\mathbf q}|^4}\Biggl|\sum_{ijk} e_{ijk} q_i q_j \epsilon^J_k(\hat{\mathbf q})\Biggr|^2, \end{equation} which describe the angular dependence of the phonon-mediated interaction. We emphasize that $\gamma(\hat{\mathbf q})>0$ for all directions $\hat{\mathbf q}$, and thus the interactions in Eq.~\eqref{phononpotential} are always attractive. Combining Eq.~\eqref{phononpotential} with the long-range Coulomb interaction in Fig.~\ref{fig2}(a), we arrive at the total e-e interaction potential \begin{equation}\label{vtot} V_{\textrm{tot}}(\mathbf q)=\frac{g_e^2}{ \mathbf q^2}\left(1-\frac{g_{ph}^2}{\rho_0}\gamma(\hat{\mathbf q})\right).
\end{equation} \begin{figure} \begin{centering} \includegraphics[width=\columnwidth]{f2.pdf} \par\end{centering} \caption{\label{fig2} Effective e-e interaction at tree level. (a) Repulsive Coulomb interaction. (b) Phonon-mediated e-e interaction, see Eq.~\eqref{phon1}. } \end{figure} Let us now consider WSMs in the $4mm$ crystal class, which in particular includes TaAs, and also use the simplifications in Eqs.~\eqref{isotropic} and \eqref{isophon}. In Voigt notation, see Sec.~\ref{sec2c}, we define the ratios of piezoelectric coefficients \begin{equation} \label{AB} A=\frac{e_{15}}{e_{33}} ,\qquad B=\frac{e_{31}}{e_{33}} . \end{equation} The anisotropy function $\gamma=\gamma(\theta)$ now depends only on the polar angle $\theta$ of $\hat{\mathbf q}$. To evaluate Eq.~\eqref{functionGJ}, the polarization unit vectors are parametrized as \begin{equation} \boldsymbol\epsilon^1(\hat{\mathbf q})=i\hat {\mathbf q}, \quad \boldsymbol\epsilon^2(\hat{\mathbf q})=\frac{i\hat{\mathbf z}\times \hat {\mathbf q}}{|\hat{\mathbf z}\times \hat {\mathbf q}|}, \quad \boldsymbol\epsilon^3(\hat{\mathbf q})=\hat {\mathbf q}\times \boldsymbol\epsilon^2(\hat{\mathbf q}), \end{equation} leading to \begin{eqnarray} \nonumber \gamma_1(\theta)&=&\frac{ e_{33}^2}{c_{ph}^2}\cos^2\theta\left[1+(2 A +B-1) \sin^2\theta \right ]^2,\\ \gamma_2(\theta)&=&0, \label{ff2}\\ \nonumber \gamma_3(\theta)&=&\frac{ e_{33}^2}{c_{ph}^2} \sin^2\theta\left[\left(B-1\right) \cos^2 \theta +A \cos (2 \theta )\right]^2. \end{eqnarray} The contribution from the $J=2$ transverse mode, where the polarization is always perpendicular to $\hat{\mathbf z}$, vanishes identically. More generally, $\gamma_J(\theta)=0$ whenever $\boldsymbol \epsilon^J\cdot \hat{\mathbf z}=0$. In the simplest approximation, one may just average over the directions $\hat{\mathbf q}$ in Eq.~\eqref{phononpotential}, see Refs.~\cite{MahanBook,Mahan}. We write the angular-averaged total interaction potential as \begin{equation}\label{vtotav} \bar V_{\textrm{tot}}(\mathbf q)=\frac{g_e^2(1-\bar \gamma)}{\mathbf q^2}. \end{equation} For the $4mm$ crystal class, we find from Eq.~\eqref{ff2} \begin{equation}\label{bargamma} \bar\gamma=\frac{g_{ph}^2}{2\rho_0} \int_0^\pi d\theta \sin(\theta)\gamma(\theta) = \frac{w_\gamma}{\rho_0} \left(\frac{g_{ph} e_{33}}{c_{ph}}\right)^2, \end{equation} with the coefficient \begin{equation} \label{CPi} w_\gamma=\frac{1}{15}\left[10 A^2+4 A (B+1)+2 B^2+3\right]. \end{equation} Clearly, for $\bar \gamma>1$, the averaged total interaction \eqref{vtotav} is attractive. One thus expects a gapped superconducting phase with $s$-wave singlet pairing. However, as we show in Sec.~\ref{sec4}, for $\bar\gamma<1$, one may also encounter more exotic superconducting phases exhibiting, e.g., nodal-line triplet pairing. \subsection{Parameter estimates}\label{sec4a} \begin{figure} \begin{centering} \includegraphics[width=\columnwidth]{f3.pdf} \par\end{centering} \caption{\label{fig3} (a) Polar plot of the anisotropy function $\gamma(\theta)$ in Eq.~\eqref{phononpotential} for the case of TaAs, with $\gamma(\theta)$ multiplied by $g_{ph}^2/\rho_0$. We take $\varepsilon=20\varepsilon_0$, where $\varepsilon_0$ is the free-space permittivity. The piezoelectric tensor values are taken from Ref.~\cite{Buckeridge}, for which we get $\bar\gamma\simeq 0.20$ in Eq.~\eqref{bargamma}. The blue color indicates that the phonon-mediated interaction is always attractive.
(b) Effective anisotropy function of the total e-e interaction potential in Eq.~\eqref{vtot}, where we adjust $e_{33}$ such that $\bar\gamma=0.97$. Blue again indicates attraction while orange represents repulsion. } \end{figure} To get concrete predictions from our theory, we need information about the piezoelectric coefficients \cite{Xue,Berlincourt,Acosta}, the permittivity $\varepsilon$, the mass density $\rho_0$, and the Fermi as well as the sound velocities. Since in TaAs the lattice parameters are $a_\perp\simeq 3.43${\AA} and $a_3\simeq 11.6${\AA}, and the conventional unit cell contains 4 Ta and 4 As ions, the mass density is $\rho_0\simeq 1.24 \times 10^4$ kg/m$^3$. We here adopt the simplifying assumptions in Eqs.~\eqref{isotropic} and \eqref{isophon}. For the Fermi velocity, we take $\hbar v\simeq 2$~eV{\AA} \cite{Huang}, which corresponds to $v\simeq 3 \times 10^5$ m/s. The sound velocity is assumed to be given by $c_{\rm ph}\simeq 6\times 10^3$~m/s, cf.~the value quoted in Ref.~\cite{Guo} for TaN. For the piezoelectric tensor of TaAs \cite{Buckeridge}, we use $e_{33}=-1.89$~Cm$^{-2}$, and the ratios in Eq.~\eqref{AB} are $A \simeq -2.62$ and $B\simeq -0.43$. This gives $w_\gamma\simeq 4.40$. Using the rough estimate $\varepsilon\approx 20 \varepsilon_0$ \cite{Throckmorton}, we obtain $\alpha_{\rm eff}\approx 0.24$ and $\bar \gamma\approx 0.20$. The latter is well below the critical value $\bar \gamma=1$. However, the value of $\bar\gamma$ could in principle be higher in other materials which might have, for instance, larger piezoelectric coefficients or a smaller permittivity. Moreover, the approximation in Eq.~\eqref{vtotav} neglects the angular anisotropy of the effective interaction. A polar plot of $\gamma(\theta)$ based on our estimates for TaAs is shown in Fig.~\ref{fig3}(a). The attractive interaction strength is maximal for $\theta=\pi/2$. This shape of $\gamma(\theta)$ is representative of the regime $|e_{15}|>|e_{33}|>|e_{31}|$, which is also realized for the paradigmatic piezoelectric insulator BaTiO$_3$ \cite{Xue}. For TaAs, the total e-e interaction potential is repulsive in all directions. However, for higher values of $\bar\gamma$ and depending on the relative strength of the Coulomb and the piezoelectric terms, there may be directions along which the total interaction potential becomes attractive even for $\bar\gamma<1$. In this case, superconducting phases could be possible despite the effective repulsion in the $s$-wave channel. In Fig.~\ref{fig3}(b), we show the angular dependence of the total e-e interaction potential \eqref{vtot} for $\bar \gamma=0.97$. Here, the total e-e interaction potential changes sign as a function of $\theta$ and becomes attractive for $\theta\simeq \pi/2$. \section{RG analysis} \label{sec3} In this section, we turn to the derivation and solution of the one-loop RG equations. In an infinitesimal RG step, the flow parameter changes as $\ell\mapsto \ell+d\ell$, where $\Lambda(\ell)=e^{-\ell}\Lambda_0$ is the running high-energy bandwidth cutoff with bare value $\Lambda_0$. We obtain the RG equations by the standard momentum-shell integration approach, where in each RG step one integrates over all field modes appearing in the partition function with energies in the shell $\Lambda(\ell+d\ell)< E<\Lambda(\ell)$. The resulting contributions to the partition function are then taken into account by renormalization of the various couplings in the action, see Refs.~\cite{Cardy,Altland}.
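Before turning to the diagrammatics, we remark that the TaAs estimates of Sec.~\ref{sec4a} are easy to reproduce numerically. A minimal sketch (Python, SI units; all variable names are our own) that recovers $w_\gamma\simeq 4.40$ and $\bar\gamma\simeq 0.20$ from Eqs.~\eqref{ff2}, \eqref{bargamma} and \eqref{CPi}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# TaAs inputs quoted in the text (SI units)
A, B = -2.62, -0.43              # e15/e33 and e31/e33
e33 = 1.89                       # |e_33| in C/m^2 (enters squared)
c_ph, rho0 = 6.0e3, 1.24e4       # sound velocity (m/s), density (kg/m^3)
eps = 20 * 8.854e-12             # permittivity, eps = 20*eps0

def gamma_tilde(th):
    """gamma_1 + gamma_3 of Eq. (ff2), e33^2/c_ph^2 prefactor stripped."""
    s, c = np.sin(th), np.cos(th)
    return (c**2 * (1 + (2*A + B - 1) * s**2)**2
            + s**2 * ((B - 1) * c**2 + A * np.cos(2*th))**2)

# w_gamma = (1/2) int_0^pi sin(th) gamma_tilde(th) dth, cf. Eq. (bargamma)
w_gamma = 0.5 * quad(lambda th: np.sin(th) * gamma_tilde(th), 0, np.pi)[0]
w_closed = (10*A**2 + 4*A*(B + 1) + 2*B**2 + 3) / 15      # Eq. (CPi)

g_ph2 = 1.0 / eps                                         # Eq. (gedef)
bar_gamma = (w_gamma / rho0) * g_ph2 * (e33 / c_ph)**2    # Eq. (bargamma)
print(round(w_gamma, 2), round(w_closed, 2), round(bar_gamma, 2))
# 4.4 4.4 0.2
\end{verbatim}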
\begin{figure} \begin{centering} \includegraphics[width=0.8\columnwidth]{f4.pdf} \par\end{centering} \caption{\label{fig4} Schematic form of the possible amplitudes generated by the local field theory in Eq.~\eqref{stotal}, where shaded regions represent dressed vertices in a perturbative expansion. } \end{figure} We start from the observation that for the local field theory \eqref{stotal}, perturbative expansions of physical observables involve only diagrams of the types shown in Fig.~\ref{fig4}. In all these diagrams, fermion loop contributions always involve the Coulomb vertex $\sim g_e$. This fact can be rationalized by recalling that the piezoelectric interaction also arises from an expansion of the Coulomb potential, see Sec.~\ref{sec2c}. The vertex $g_{ph}$ only appears in regular, perturbative corrections to the Coulomb propagator. At the one-loop level, perturbation theory in $g_e$ generates the diagrams in Figs.~\ref{fig5}(a), \ref{fig5}(b) and \ref{fig5}(c), which are precisely the diagrams that govern the one-loop renormalization of e-e interactions in the absence of phonons \cite{Throckmorton}. Within the static approximation with the angular-averaged interaction potential in Eq.~\eqref{vtotav}, the piezoelectric interaction is combined with the Coulomb e-e interaction and its effect amounts to replacing $g_e^2\mapsto g_e^2(1-\bar\gamma)$. As a consequence, the essential physics of the system can be studied in terms of a single dimensionless coupling, namely the effective fine structure constant \begin{equation}\label{finestructure} \alpha_{\textrm{eff}}=\frac{g_e^2(1-\bar \gamma)}{4\pi v}. \end{equation} Within this static approximation, the RG equation for $\alpha_{\textrm{eff}}$ at the one-loop level follows from the diagrams in Figs.~\ref{fig5}(a), \ref{fig5}(b) and \ref{fig5}(c). The result is \cite{Throckmorton} \begin{equation}\label{aflow} \frac{d\alpha_{\textrm{eff}}}{d\ell}=-\frac{2(N+1)}{3\pi}\alpha_{\textrm{eff}}^2. \end{equation} Therefore, the system flows to strong coupling when the effective fine structure constant becomes negative. This happens for sufficiently strong piezoelectric coupling, in the regime $\bar\gamma>1$. The strong-coupling phase realized for $\bar \gamma>1$ is expected to be an intrinsic superconductor since the attractive e-ph interaction then dominates over the repulsive Coulomb interaction. Previous work \cite{Cho,Hosur2013,Bednik} has discussed intrinsic superconductivity in \emph{doped} WSMs. The new element in our system is the long-range e-e interaction resulting from a combination of unscreened Coulomb and piezoelectric interactions. We recall that the standard BCS formula for the superconducting gap is given by $\Delta\sim e^{-1/(\nu_F |\lambda|)}$, where $\nu_F$ is the normal density of states at the Fermi level and $\lambda$ denotes the strength of the short-range attractive interaction. For vanishing $\nu_F$, intrinsic superconductivity is not possible unless the short-range interaction exceeds a critical coupling of the order of the electronic bandwidth, far beyond the perturbatively accessible regime. As we will see in Sec.~\ref{sec4}, the long-range character of the piezoelectric interaction allows for the opening of a finite gap even for the undoped case with $\nu_F=0$. In this case, the gap is a function of the dimensionless parameter $\alpha_{\textrm{eff}}<0$.
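As a consistency check on these statements, Eq.~\eqref{aflow} can be integrated in closed form, assuming that the one-loop equation remains valid along the entire flow:
\begin{equation*}
\alpha_{\textrm{eff}}(\ell)=\frac{\alpha_{\textrm{eff}}(0)}{1+\frac{2(N+1)}{3\pi}\,\alpha_{\textrm{eff}}(0)\,\ell}.
\end{equation*}
For $\alpha_{\textrm{eff}}(0)>0$ the coupling decays only logarithmically with the energy scale, while for $\alpha_{\textrm{eff}}(0)<0$ it diverges at the finite RG scale $\ell_\ast=3\pi/[2(N+1)|\alpha_{\textrm{eff}}(0)|]$, i.e., at the running cutoff $\Lambda_\ast=\Lambda_0 e^{-\ell_\ast}$. This exponentially small scale provides a rough RG estimate of where the strong-coupling physics sets in, and it is consistent in form with the mean-field gap derived in Sec.~\ref{sec4}.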
Eliminating the need for doping to realize superconductivity in WSMs is important because the density of states cannot be made very large if one wants to stay below the energy scale $vb$, where $b$ is the momentum separation between two Weyl nodes. In fact, at high energies, nonlinearities will appear in the dispersion relation. \subsection{RG equations beyond the static approximation}\label{sec3a} We can use the RG approach to analyze how the piezoelectric interaction affects the running couplings in the effective action \eqref{stotal} beyond the static approximation, i.e., including retardation effects. After performing an infinitesimal RG transformation and rescaling $\psi\mapsto (1+\delta Z_\psi/Z_\psi)^{1/2}\psi$ and $\varphi\mapsto (1+\delta Z_\varphi/Z_\varphi)^{1/2}\varphi$ to absorb the field renormalizations, we obtain a correction to the effective action of the form \begin{eqnarray} \delta S&=&\int d^4x\Bigl[ -iv \left(1+\frac{\delta v}{v}+\frac{\delta Z_\psi}{Z_\psi}\right)\psi^\ast ({\bm\nabla}\cdot\boldsymbol\sigma) \psi \nonumber\\ && + ig_{ e}\left(1+\frac{\delta g_e}{g_e}+\frac{\delta Z_\psi}{Z_\psi}+\frac12\frac{\delta Z_\varphi}{Z_\varphi}\right) \psi^\ast\psi\, \varphi \\ &&+ ig_{ph} \left(1+\frac{\delta g_{ph}}{g_{ph}}+\frac12\frac{\delta Z_\varphi}{Z_\varphi}\right) \sum_{jkl} e_{jkl} \partial_j \varphi\, u_{kl} \Bigr] .\nonumber \end{eqnarray} We can compute $\delta g_e$ and $\delta g_{ph}$ from the corresponding vertex corrections, whereas $\delta Z_\psi$ and $\delta v$ stem from the electron self-energy and $\delta Z_\varphi$ from the polarization insertion in the Coulomb propagator. The corrections can then be absorbed as a renormalization of the parameters $v$, $g_e$ and $g_{ph}$. At the one-loop level and at lowest order in $g_{ph}$, the contributions from the piezoelectric interaction are represented by the diagrams shown in Figs.~\ref{fig5}(d) and \ref{fig5}(e). These are generated by taking into account the (non-divergent) correction to the Coulomb propagator at order $g_{ph}^2$. In the following, we separately discuss each of the five diagrams in Fig.~\ref{fig5}. \begin{figure} \begin{centering} \includegraphics[width=.9\columnwidth]{f5.pdf} \par\end{centering} \caption{\label{fig5} Diagrams contributing to the one-loop RG equations. (a) Coulomb correction to the electronic self-energy. (b) Vertex correction due to Coulomb interaction. (c) Polarization bubble inserted in the Coulomb propagator. (d) Piezoelectric correction to the electronic self-energy. (e) Piezoelectric vertex correction.} \end{figure} \subsubsection{Coulomb correction to the electronic self-energy}\label{sec3a1} The standard rainbow diagram in Fig.~\ref{fig5}(a) describes the lowest-order correction to the electronic self-energy due to Coulomb interactions. A well-known consequence of this contribution is a renormalization of the Fermi velocities. Related effects have been predicted and experimentally observed for graphene \cite{Elias}. The diagram in Fig.~\ref{fig5}(a) yields the self-energy \begin{equation}\label{selfenergy1} \Sigma_{ee}(p)=-g_e^2\int\frac{d^4q}{(2\pi)^4}\frac{1}{ \mathbf q^2}\mathbb G( p+q), \end{equation} with $p=(i\omega,\mathbf p)$. We evaluate Eq.~\eqref{selfenergy1} in App.~\ref{app1}, where we show that $\Sigma_{ee}$ does not depend on the frequency $\omega$; hence no field renormalization arises from this term, $\delta Z_\psi=0$.
Integrating out the modes of the field $\varphi$ within the high-energy momentum shell and keeping only self-energy terms linear in the momentum $\mathbf p$, we arrive at the self-energy correction \begin{equation}\label{selfenergy2} \delta\Sigma_{ee}(\mathbf p)= \frac{g_e^2}{8\pi^2} \left(\eta_\perp \mathbf p_\perp\cdot \boldsymbol\sigma_\perp+\eta_3 p_3\sigma_3 \right)d\ell, \end{equation} where the numbers $\eta_\perp$ and $\eta_3$ depend on the Fermi velocity ratio $v_3/v_\perp$, cf.~App.~\ref{app1}. By comparing with Eq.~\eqref{Hplus}, we see that Eq.~\eqref{selfenergy2} generates a correction to the Fermi velocities $v_\perp$ and $v_3$. For the isotropic case \eqref{isotropic}, we get $\eta_\perp=\eta_3=4/3$. In this case, we obtain \begin{equation}\label{vrenormCoulomb} \delta v = \frac{g_e^2}{6\pi^2}d\ell . \end{equation} By itself, this term makes the Fermi velocity increase under the RG flow. \subsubsection{Vertex correction due to Coulomb interaction} \label{sec3a2} Next we turn to the diagram in Fig.~\ref{fig5}(b), which provides a vertex correction due to the Coulomb interaction, corresponding to a charge renormalization \cite{Altland}. However, this diagram gives no contribution: the instantaneous Coulomb interaction does not produce a charge renormalization for Weyl (or Dirac) fermions at the one-loop level \cite{Kotov}. For the corresponding 2D graphene case, charge renormalization is absent even at the two-loop level \cite{Kotov}. \subsubsection{Coulomb propagator: Polarization bubble}\label{sec3a3} At the one-loop level, the self-energy of the field $\varphi$ comes from the standard polarization bubble in Fig.~\ref{fig5}(c). Following the analysis of Ref.~\cite{Throckmorton}, the self-energy correction can be absorbed by the field renormalization of $\varphi$, \begin{equation} \label{fieldren} \delta Z_\varphi=-\frac{N g_e^2 }{6\pi^2 v} Z_\varphi d\ell , \end{equation} where the presence of a fermion loop in the diagram implies that this correction is proportional to the number of Weyl nodes, $2N$. For simplicity, we have again assumed isotropic Fermi velocities, see Eq.~\eqref{isotropic}. \subsubsection{Piezoelectric self-energy correction}\label{sec3a4} Next we turn to the electronic self-energy $\Sigma_{ep}(i\omega,\mathbf p)$ due to e-ph interactions, which to one-loop order comes from the diagram in Fig.~\ref{fig5}(d). We evaluate this term in App.~\ref{app2}, see Eq.~\eqref{B4}. A non-universal contribution arises for $\omega=\mathbf p=0$, which can be absorbed by a renormalization of the chemical potential. A similar contribution also comes from e-e interactions, see App.~\ref{app1}, and we eventually require the renormalized chemical potential to be located at the Weyl node. As discussed in App.~\ref{app2}, for $4mm$ crystal symmetry and again using Eqs.~\eqref{isotropic} and \eqref{isophon}, the self-energy correction after momentum-shell integration is given by \begin{eqnarray}\label{selfenergy3} \delta\Sigma_{ep}(p)&=&-\frac{1}{4\pi \rho_0}\left(\frac{g_eg_{ph} e_{33}}{c_{ph}}\right)^2 \frac{c_{ph}}{v}\\ \nonumber &\times& \left(\frac{i\omega C_0}{v} \sigma_0 + C_\perp\mathbf p_\perp\cdot\boldsymbol\sigma_\perp+C_3p_3\sigma_3 \right) d\ell, \end{eqnarray} with the numbers $C_0\simeq 1.40$, $C_\perp\simeq 0.29$ and $C_3\simeq 0.83$ for TaAs.
The smallness of the factor $c_{ph}/v\ll 1$, together with the fact that in practice we have $\bar\gamma\lesssim 1$ in Eq.~\eqref{bargamma}, implies that the contributions from Eq.~\eqref{selfenergy3} to the RG equations are rather small. In marked contrast to the Coulomb case, we now encounter in Eq.~\eqref{selfenergy3} a term $\Sigma_{ep}\sim \omega$ responsible for field renormalization, \begin{equation}\label{fieldren1} \delta Z_\psi =-\frac{C_0}{4\pi \rho_0 v} \frac{c_{ph}}{v} \left(\frac{g_e g_{ph} e_{33}}{c_{ph} }\right)^2 Z_\psi d\ell , \end{equation} implying that the quasi-particle weight $Z_\psi$ decreases under the RG flow. The $\mathbf p\ne 0$ terms in Eq.~\eqref{selfenergy3} can be absorbed by renormalization of the Fermi velocities. In general, even for initially isotropic velocities, the fact that $C_\perp\ne C_3$ implies that piezoelectric couplings intrinsically generate anisotropic Fermi velocities. Because we have $c_{ph}/v\ll 1$, however, this Fermi velocity renormalization is typically subleading against the dominant Coulomb term in Eq.~\eqref{vrenormCoulomb}. For simplicity, we here neglect the RG-generated anisotropy of the Fermi velocities and only focus on the mean value of the Fermi velocity defined as $v=(2v_\perp+v_3)/3$, cf.~Eq.~\eqref{isotropic}. Taking into account Eq.~\eqref{vrenormCoulomb} and using the number $\bar C=(2C_\perp+C_3)/3$, with $\bar C\simeq 0.47$ for TaAs, we obtain another correction to the Fermi velocity which must be added to Eq.~\eqref{vrenormCoulomb}, \begin{equation}\label{vrenormCoulomb2} \delta v' =-\frac{g_e^2}{4\pi} \frac{\bar C c_{ph}}{\rho_0 v} \left(\frac{g_{ph} e_{33}}{c_{ph}}\right)^2 d\ell. \end{equation} Since $\bar C>0$, the piezoelectric corrections tend to decrease the Fermi velocities. \subsubsection{Piezoelectric vertex correction}\label{sec3a5} One-loop vertex corrections do arise from the piezoelectric coupling, see the diagram in Fig.~\ref{fig5}(e). This diagram is studied in detail in App.~\ref{app3}. We obtain a charge renormalization corresponding to the RG flow of the coupling $g_e$ in Eq.~\eqref{gedef}. For the $4mm$ crystal class, and using again Eqs.~\eqref{isotropic} and \eqref{isophon}, we obtain \begin{equation}\label{deltacharge} \delta g_e =\frac{C_0}{4\pi\rho_0}\frac{c_{ph}}{v} \left( \frac{ g_e g_{ph} e_{33}}{c_{ph}} \right)^2 g_e d\ell, \end{equation} with $C_0\simeq 1.40$ for TaAs. Note the factor of $c_{ph}/v\ll 1$, which is a manifestation of Migdal's theorem for WSMs \cite{Roy2014}. The fact that the same coefficient $C_0$ governs both the vertex correction and the field renormalization, see Eq.~\eqref{fieldren1}, is due to a Ward identity for electron-phonon interactions \cite{Engelsberg}. We also have $\delta g_{ph}=0$ because there are no loop corrections to this vertex. \subsection{RG equations}\label{sec3b} We now collect the results of Sec.~\ref{sec3a}. The one-loop RG equations are then given by \begin{eqnarray} \nonumber \frac{dZ_\psi}{d\ell}&=&-C_0\frac{c_{ph}}{v} \frac{g_e^2}{4\pi v}\frac{g_{ph}^2e_{33}^2}{\rho_0c_{ph}^2} Z_\psi ,\\ \nonumber \frac{d Z_\varphi}{d\ell}&=&-\frac{Ng_e^2}{6\pi^2 v} Z_\varphi,\\ \frac{dv}{d\ell}&=& \frac{g_e^2}{6\pi^2} \left[ 1 - \frac{3\pi (C_0+\bar C) }{2} \frac{c_{ph}}{v}\frac{g_{ph}^2e_{33}^2}{\rho_0c_{ph}^2} \right]\label{RGfull} ,\\ \nonumber \frac{d g_e}{d\ell}&=&-\frac{Ng_e^3}{12\pi^2 v},\\ \nonumber \frac{d g_{ph}}{d\ell}&=&-\frac{Ng_e^2 g_{ph}}{12\pi^2 v}.
\end{eqnarray} We note that on effective length scales beyond the mean free path, disorder effects could modify the above RG equations. For $g_{ph}=0$, we recover the RG equations in the absence of phonons, in which case the Coulomb vertex $g_e$ is marginally irrelevant and the Fermi velocity increases monotonically as we lower the energy scale. For $g_{ph}\neq 0$, the vertex correction $\delta g_e/g_e$ due to the piezoelectric interaction gets canceled by the field renormalization $\delta Z_\psi/Z_\psi$, and $g_e$ still decreases with the RG flow. Solving the RG equations numerically with the initial condition set by the parameters for TaAs, we obtain the flow diagram in Fig.~\ref{figZ}(a). However, we find that an instability can arise if the piezoelectric interaction is strong enough to reverse the flow of the Fermi velocity and make it vanish (or become of the order of the phonon velocity) at some finite energy scale. A rough estimate of the condition for this instability is obtained by imposing that $dv/d\ell$ must be negative at the beginning of the RG flow. This requires $\bar\gamma>\frac{2 w_\gamma}{3\pi (C_0+\bar C)}\frac{v}{c_{ph}}$. While $C_0$, $\bar C$ and $w_\gamma$ are constants of order unity, the large velocity ratio $v/c_{ph}\gg 1$ pushes the critical $\bar\gamma$ to a higher value than estimated within the static approximation. Integrating the RG equations numerically, we find that the renormalized velocity does vanish when we enhance the piezoelectric coefficient such that $\bar\gamma \simeq 75$, as shown in Fig.~\ref{figZ}(b). Therefore, this RG analysis suggests that retardation effects make the WSM phase more stable against a superconducting transition. \begin{figure} \begin{centering} \includegraphics[width=.9\columnwidth]{f6.pdf} \par\end{centering} \caption{\label{figZ} Renormalized Fermi velocity $v(\ell)$ and Coulomb coupling $g_e(\ell)$ as functions of the RG flow parameter $\ell=\ln(\Lambda_0/\Lambda)$. (a) Flow diagram obtained using the estimated parameters for TaAs, corresponding to $\bar\gamma=0.20$, but considering $2N=4$ Weyl nodes. (b) Flow diagram obtained by enhancing the piezoelectric coefficient $e_{33}$ to reach $\bar \gamma\simeq 75$. Here we stop the RG flow at the scale where the Fermi velocity vanishes, at which point the WSM becomes unstable.} \end{figure} \section{Phase diagram and superconductivity}\label{sec4} We next perform a self-consistent mean-field analysis to locate superconducting regions in the phase diagram within the static approximation for the total interaction. We develop the mean-field approach in Sec.~\ref{sec4b} and study the stability of superconducting phases with singlet or triplet pairing. For small $\bar \gamma$, the WSM phase remains stable but will be characterized by a sizeable quasi-particle decay rate $\Gamma$. We determine the dependence of $\Gamma$ on temperature and on the energy of the quasi-particle in Sec.~\ref{sec4c}. \subsection{Mean field theory}\label{sec4b} Since pairing involves time-reversed partner states, we consider the effective inter-node e-e interaction potential $V_{\rm tot}(\mathbf q)$ in Eq.~\eqref{vtot} for a pair of nodes ($h=1,2$) that are linked by time reversal.
The Hamiltonian is then given by \begin{eqnarray}\label{heff1} H_{\textrm{eff}}&=&\sum_{h=1}^2 \sum_{\mathbf p}\psi^\dagger_h(\mathbf p) \left(v \mathbf p\cdot\boldsymbol\sigma\right) \psi^{\phantom\dagger}_h(\mathbf p) \\ \nonumber &+&\frac1{V}\sum_{\mathbf k,\mathbf p,\mathbf q}V_{\textrm{tot}}(\mathbf q)\psi^\dagger_1(\mathbf p+\mathbf q)\psi^{\phantom\dagger}_1(\mathbf p)\psi^\dagger_2(\mathbf k-\mathbf q)\psi^{\phantom\dagger}_2(\mathbf k). \end{eqnarray} We assume the static approximation for the total e-e interaction, as done in the standard BCS theory for the normal-metal-superconductor transition. While phonon-induced retardation effects could be included within Eliashberg theory, we here explore only the static case defined by Eq.~\eqref{heff1}. We expect to encounter a superconducting phase for $\bar \gamma>1$, see Eq.~\eqref{bargamma}, where the effective interaction $V_{\rm tot}$ will be attractive in all directions and the order parameter should describe $s$-wave singlet pairing. However, it is worth mentioning that the breaking of spin-rotational invariance by spin-orbit coupling in WSMs blurs the distinction between singlet and triplet pairing \cite{Cho}. In fact, a mixing of singlet and triplet components is generic for non-centrosymmetric superconductors \cite{Sigrist,Yip}. With this caveat in mind, we now implement the mean-field approximation for $H_{\rm eff}$ in Eq.~\eqref{heff1}. We consider a generic spin-matrix order parameter, $\boldsymbol\Xi(\mathbf k)$, defined by \begin{equation}\label{BigXi} \left\langle\psi_{1\sigma}(\mathbf k)\psi_{2\sigma'}(-\mathbf k+\mathbf q)\right\rangle=\delta_{\mathbf q,0} \, \left[{\bm\Xi}(\mathbf k)i\sigma_2\right]_{\sigma\sigma'}. \end{equation} The gap function then also corresponds to a complex-valued spin matrix, \begin{equation}\label{DeltaXi} {\bm\Delta}(\mathbf p)=-\frac1V\sum_{\mathbf k}V_{\textrm{tot}}(\mathbf p-\mathbf k){\bm\Xi}(\mathbf k). \end{equation} Using four-component Nambu spinor operators \cite{MahanBook}, \begin{equation} \Psi(\mathbf p)=\left(\begin{array}{c}\psi_1(\mathbf p)\\ i\sigma_2\psi^\dagger_2(-\mathbf p)\end{array}\right), \quad \psi_h(\mathbf p)= \left(\begin{array}{c} \psi_{h,\uparrow}(\mathbf p)\\ \psi_{h,\downarrow}(\mathbf p)\end{array}\right), \label{Nambu} \end{equation} the standard mean-field decoupling scheme yields the Bogoliubov-de-Gennes (BdG) Hamiltonian \begin{eqnarray}\nonumber H_{\textrm{BdG}}&=& \sum_{\mathbf p} \left(\Psi^{\dagger}(\mathbf p) {\cal H}_{\rm BdG}(\mathbf p) \Psi(\mathbf p) +\textrm{Tr}\left[{\bm\Delta}^\dagger(\mathbf p){\bm\Xi}(\mathbf p)\right]\right),\\ \label{BdG} && {\cal H}_{\rm BdG}(\mathbf p) = \left(\begin{array}{cc} v\boldsymbol \sigma\cdot\mathbf p & {\bm\Delta}(\mathbf p)\\ {\bm\Delta}^\dagger(\mathbf p)&-v\boldsymbol \sigma\cdot\mathbf p \end{array}\right). \end{eqnarray} We will now examine the conditions for superconducting phases with singlet vs triplet pairing. \subsubsection{Singlet pairing}\label{sec4b1} For the case of singlet pairing, we write ${\bm\Delta}(\mathbf p)=\Delta_0(\mathbf p)\sigma_0$ in a gauge where the scalar function $\Delta_0(\mathbf p)$ is real valued. Diagonalizing ${\cal H}_{\rm BdG}(\mathbf p)$ in Eq.~\eqref{BdG}, one finds the eigenvalues $\pm E_s(\mathbf p)$ with $E_s(\mathbf p)=\sqrt{v^2 {\bf p}^2+\Delta_0^2(\mathbf p)}$. The gap equation then follows from Eq.~\eqref{DeltaXi} by noting that Eq.~(\ref{BigXi}) is solved by a spin-isotropic matrix, ${\bm \Xi}(\mathbf k)= \frac{\Delta_0(\mathbf k)}{2E_s(\mathbf k)}\sigma_0$. 
Using the averaged interaction potential in Eq.~\eqref{vtotav} with $\bar \gamma$ in Eq.~\eqref{bargamma}, the solution follows by assuming a constant gap function, $\Delta_0(\mathbf k)=\Delta_0$, corresponding to $s$-wave pairing. For $\Delta_0\ne 0$, with Eq.~\eqref{finestructure} we arrive at the gap equation \begin{eqnarray}\nonumber 1&=&-\frac{1-\bar \gamma}{4\pi^2}\int_0^{b}dk\,k^2\frac{g_e^2}{ k^2\sqrt{v^2k^2+\Delta_0^2}} \\&=& -\frac{\alpha_{\rm eff}}{\pi}\ln\left(\frac{2vb}{\Delta_0}\right),\label{solutiongap} \end{eqnarray} where the large-momentum cutoff $b$ corresponds to the momentum separation between different Weyl nodes. For $\alpha_{\rm eff}<0$, corresponding to $\bar \gamma>1$, we then find the isotropic gap \begin{equation}\label{delta0singlet} \Delta_0=2vb\, e^{-\pi/ |\alpha_{\rm eff}|}. \end{equation} Assuming that $\Delta_0$ has the same sign at both Weyl nodes \cite{Hosur2013,Meng}, we obtain a topologically trivial gapped superconductor with conventional $s$-wave singlet pairing. However, it is worth noting again that a finite gap emerges even though $\nu_F$ vanishes at the Fermi level. Technically, the $1/{\bf k}^2$ momentum dependence of the long-range interaction potential compensates the density-of-states factor $k^2$ in Eq.~(\ref{solutiongap}). \subsubsection{Nodal-line triplet pairing} \label{sec4b2} We next investigate the possibility of other superconducting phases at $\bar\gamma<1$, where the effective interaction is repulsive along certain directions but a significant attractive component exists near the $q_3=0$ plane, see Fig.~\ref{fig3}(b). A general superconducting order parameter can be written as \begin{equation} \label{ordpar} {\bm\Delta}(\mathbf k)=\Delta_0(\mathbf k)\sigma_0+\mathbf a(\mathbf k)\cdot \boldsymbol \sigma, \end{equation} where $\Delta_0(\mathbf k)$ is a real scalar function and $\mathbf a(\mathbf k)$ is a complex vector field. For ${\mathbf a}\ne 0$, the superconducting phase has a triplet pairing component \cite{Cho}. We require that the BdG Hamiltonian \eqref{BdG} preserves time-reversal symmetry, which implies the conditions \begin{equation} \Delta_0(-\mathbf k)=\Delta_0(\mathbf k),\quad \mathbf a^*(-\mathbf k)=-\mathbf a(\mathbf k).\label{cond1} \end{equation} We then expand Eq.~\eqref{ordpar} to first order in $\mathbf k$, where time-reversal symmetry and Eq.~\eqref{cond1} imply \begin{equation}\label{mftpar} \Delta_0(\mathbf k)=\Delta_0 ,\quad {\mathbf a}({\mathbf k})= {\bm M}\cdot {\mathbf k} + i {\mathbf a}_2 . \end{equation} Here ${\bm M}$ is a real $3\times 3$ matrix and the vector ${\mathbf a}_2$ also has real entries. Next, in order to reduce the number of mean-field parameters, we exploit the global spin and orbital rotation symmetry around the $z$-axis of the tetragonal crystal. In doing so, we assume that this symmetry is approximately realized even when expanding around the Weyl nodes; the approximation becomes exact if the Weyl points are separated along the $z$-axis in momentum space. Indeed, a state that minimizes the energy should take advantage of the anisotropy in the effective interaction \eqref{vtot}. We thus take $\mathbf a_2=a_2\hat {\mathbf z}$ and ${\bm M}={\rm diag}(a_{\perp},a_{\perp},a_{\parallel})$, leaving us with only four mean-field parameters in Eq.~\eqref{mftpar}.
For $\Delta_0=0$, the eigenvalues of ${\cal H}_{\rm BdG}(\mathbf k)$ are given by $\pm E_t(\mathbf k)$ with \begin{eqnarray}\label{spectrumnodal} E^2_t(\mathbf k)&=& v^2 \mathbf k ^2+a_{\perp}^2 \mathbf k_\perp^2+a_{\parallel}^2 k_3^2+a_2^2 \\ \nonumber &\pm& 2 |\mathbf k_\perp|\sqrt{(v^2+ a_{\perp}^2) a_2^2+v^2 k_3^2( a_{\perp}-a_{\parallel})^2} . \end{eqnarray} For $a_2=0$, the energy only vanishes at $\mathbf k=0$, and each of the original Weyl nodes splits into two Bogoliubov-Weyl nodes, similar to the result of Ref.~\cite{Meng} for pairing between nodes with opposite chirality. For $a_2\neq0$, the spectrum instead exhibits a \emph{nodal ring} in the $k_3=0$ plane, \begin{equation} |\mathbf k_\perp|=\frac{|a_2|}{\sqrt{v^2+a_{\perp}^2}},\quad k_3=0.\label{nodalring} \end{equation} For a general discussion of non-centrosymmetric nodal superconductors, see Refs.~\cite{Armitage2018,Schnyder,Chiu}. Interaction-induced instabilities in nodal-line WSMs have also recently been studied, e.g., in Ref.~\cite{Volkov2018}. \begin{figure} \begin{centering} \includegraphics[width=\columnwidth]{f7.pdf} \par\end{centering} \caption{\label{fig7} Schematic representation of the dispersion relations of the two bands for Bogoliubov quasiparticles. Here we set the mean-field parameters $a_\parallel=a_\perp=0$ and plot the dispersion for $k_3=0$. (a) For $a_2=\Delta_0=0$, the Weyl nodes conjugated by time-reversal symmetry appear as two degenerate Bogoliubov-Weyl nodes. (b) For $\Delta_0 =0$ but $a_2\neq0$, the spectrum is gapless along a nodal line located in the $k_3=0$ plane. (c) For $a_2\neq0$ and $\Delta_0\neq0$, the spectrum is fully gapped. } \end{figure} The spectrum in Eq.~(\ref{spectrumnodal}) shows that the parameters $a_{\parallel}$ and $a_{\perp}$ essentially only renormalize the Fermi velocities, without introducing new physics. In order to get tractable analytical expressions, we thus consider the case $a_{\parallel}=a_{\perp}=0$ in what follows. In particular, we test whether it is energetically favorable to convert Weyl nodes into the nodal ring in Eq.~\eqref{nodalring}, where the attractive interactions are most pronounced. To that end, self-consistency equations for the order parameters are derived as shown in App.~\ref{app4}. We arrive at the coupled equations \begin{eqnarray}\label{eqa2} a_2&=&\frac{\alpha_{\rm eff} a_2 }{4\pi} \int_0^\pi d\theta\sin\theta\left[\gamma(\theta)-1 \right]\\ \nonumber &\times& \left(1+\sin^2\theta\right)\ln\left(\frac{4v^2b^2}{\Delta_0^2+a_2^2\cos^2\theta}\right), \end{eqnarray} and \begin{equation}\label{generalgapeq} \Delta_0=\frac{\alpha_{\rm eff}\Delta_0}{4\pi} \int_0^\pi d\theta\sin\theta \left[\gamma(\theta)-1\right] \ln\left(\frac{4v^2b^2}{\Delta_0^2+a_2^2\cos^2\theta}\right). \end{equation} Note that Eq.~\eqref{eqa2} differs from Eq.~\eqref{generalgapeq} by the factor $(1+\sin^2\theta)$ in the integrand. This factor enhances the contribution from $\theta\approx \pi/2$, where $\gamma(\theta)$ has its maximum. This observation suggests the existence of a parameter window where Eq.~(\ref{eqa2}) has a solution with $a_2\neq0$ while $\Delta_0=0$ is the only solution to Eq.~(\ref{generalgapeq}). In App.~\ref{app4}, we confirm that an intermediate parameter regime exists, $\bar\gamma'<\bar\gamma<1$, where such a solution is stable, at least in the absence of disorder. Using TaAs parameters, we find $\bar\gamma'\simeq 0.91$. The respective value of the order parameter $a_2$ is given by Eq.~\eqref{a2gap}.
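These statements are readily checked by direct numerical diagonalization of Eq.~\eqref{BdG}. Below is a minimal self-contained sketch (Python; function names and parameter values are our own illustrative choices) that compares the numerical spectrum with Eq.~\eqref{spectrumnodal} and confirms the nodal ring \eqref{nodalring}:
\begin{verbatim}
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_bdg(k, v=1.0, Delta0=0.0, a_perp=0.0, a_par=0.0, a2=0.0):
    """H_BdG(k) of Eq. (BdG) with the order parameter of Eq. (mftpar):
    Delta(k) = Delta0*s0 + a(k).sigma,
    a(k) = (a_perp*kx, a_perp*ky, a_par*k3 + 1j*a2)."""
    kx, ky, kz = k
    h = v * (kx*sx + ky*sy + kz*sz)
    D = Delta0*s0 + a_perp*kx*sx + a_perp*ky*sy + (a_par*kz + 1j*a2)*sz
    return np.block([[h, D], [D.conj().T, -h]])

def Et2(k, v=1.0, a_perp=0.0, a_par=0.0, a2=0.0, sign=+1):
    """E_t^2(k) of Eq. (spectrumnodal), valid for Delta0 = 0."""
    kx, ky, kz = k
    kp = np.hypot(kx, ky)
    root = np.sqrt((v**2 + a_perp**2)*a2**2
                   + v**2 * kz**2 * (a_perp - a_par)**2)
    return (v**2*(kp**2 + kz**2) + a_perp**2*kp**2 + a_par**2*kz**2
            + a2**2 + sign*2*kp*root)

pars = dict(v=1.0, a_perp=0.2, a_par=0.1, a2=0.4)
k = (0.3, -0.2, 0.5)
num = np.linalg.eigvalsh(h_bdg(k, **pars))
ana = sorted(s * np.sqrt(Et2(k, sign=pm, **pars))
             for s in (1, -1) for pm in (1, -1))
print(np.allclose(num, ana))     # True: agrees with Eq. (spectrumnodal)

# gap closes on the nodal ring of Eq. (nodalring)
k_ring = pars['a2'] / np.sqrt(pars['v']**2 + pars['a_perp']**2)
eigs = np.linalg.eigvalsh(h_bdg((k_ring, 0.0, 0.0), **pars))
print(np.min(np.abs(eigs)))      # ~ 0
\end{verbatim}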
Our mean-field approach suggests that superconductivity will be absent for $\bar\gamma< \bar\gamma'$, where the WSM phase presumably remains stable. We study the quasi-particle lifetime in this regime in Sec.~\ref{sec4c} below. In the intermediate regime $\bar\gamma'<\bar\gamma<1$, however, the system becomes a gapless triplet superconductor with inter-node pairing, where the Weyl nodes split and form a nodal ring located in the $k_3=0$ plane. Finally, for $\bar\gamma>1$, the system enters a fully gapped superconducting phase with $s$-wave singlet pairing, see Sec.~\ref{sec4b1}. The general picture is illustrated in Fig.~\ref{fig7}. We emphasize that all these phase transitions can already happen for small absolute values of the fine structure constant $\alpha=g_e^2/(4\pi v)$, within the perturbatively accessible regime. \subsubsection{Other competing phases} \label{sec4b3} So far we have discussed superconducting pairing with zero Cooper pair momentum in time-reversal-symmetric WSMs, where a pair of nodes at opposite momenta is conjugated by time reversal. By contrast, in inversion-symmetric WSMs, the opposite chirality of nodes entails that states with momentum $\mathbf k$ and $-\mathbf k$ do not necessarily have opposite spin. In such cases, the type of superconducting order is less clear because pairing between parity-reversed nodes leads to a gapless superconductor \cite{Meng,Cho,Li}. The authors of Refs.~\cite{Cho,Wei} have argued that a fully gapped Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state with intra-node pairing has lower energy than the gapless state. On the other hand, in Ref.~\cite{Bednik} an odd-parity BCS state with lower energy than the FFLO state was found. Using our model, pairing between nodes of opposite chirality can also be studied and could allow for a nodal FFLO-type superconducting phase. However, paired states are then not related by any symmetry, and we find it unlikely that a lower energy than for the BCS state in Sec.~\ref{sec4b1} can be achieved for $\bar\gamma>1$. Moreover, our attractive phonon-mediated interaction favors pairing between time-reversal-conjugated nodes. In Eq.~\eqref{piezoqdep}, phonons couple to the total electronic density, and projecting $H_{\rm pz}$ onto the Weyl nodes at low energies, we find the same coupling to all nodes. Nonetheless, the process of integrating out high-energy modes could lift this degeneracy, and one pair of Weyl nodes may ultimately have a stronger coupling. The effective e-e interaction used as input in Eq.~\eqref{heff1} will then favor a pairing of the time-reversal-conjugated nodes with the strongest coupling, as opposed to some other combination of nodes. Let us also comment on the possibility of charge density wave (CDW) phases, see Ref.~\cite{Wang}. For the model with short-range attractive interactions in Ref.~\cite{Wang}, a CDW instability can only occur at strong coupling. It is straightforward to adapt their calculation to our model with long-range attraction. The mean-field Hamiltonian for the CDW state is essentially the same as for our singlet pairing state in Sec.~\ref{sec4b1}. The difference is that the four-component spinor is defined as $(\psi_{1\uparrow},\psi_{1\downarrow},\psi_{2\uparrow},\psi_{2\downarrow})^t$, where 1 and 2 now refer to two nodes with opposite chirality and the order parameter is $\langle \psi^\dagger_1(\mathbf k)\psi^{\phantom\dagger}_2(\mathbf k)\rangle$.
As this CDW order parameter breaks chiral symmetry, it leads to an axion insulator where the axion field is identified with the phase of the charge density wave. However, in our setting, this type of order depends on the interaction between nodes which are not related by any symmetry. By the above argument, this state should have higher energy than the BCS state. In addition, there may be other phases at intermediate coupling strength, $\bar\gamma\lesssim 1$. One particularly intriguing possibility concerns phases that break time-reversal symmetry spontaneously, e.g., a $p+ip$ superconductor. We leave the exploration of such phases to future work. \section{Quasi-particle lifetime}\label{sec4c} We next address the temperature and momentum dependence of the on-shell quasi-particle decay rate, $\Gamma(\mathbf p,T)$, caused by the piezoelectric e-ph coupling. We assume that $\bar\gamma$ is so small that interaction-induced instabilities are absent. We show below that in this WSM phase, the e-ph interaction is responsible for a rather large quasi-particle decay rate, scaling as $\Gamma\sim T/\ln(b/|\mathbf p|)$ at low-to-intermediate temperatures with $T\gg c_{ph}|\mathbf p|$. To ease notation, we again employ Eqs.~\eqref{isotropic} and \eqref{isophon}. \subsection{General expression for the decay rate}\label{sec4c1} Diagrammatically, the lowest-order electronic self-energy is represented by Figs.~\ref{fig5}(a) and \ref{fig5}(d). Since the rainbow diagram in Fig.~\ref{fig5}(a) is a real-valued Hartree-Fock diagram, it does not contribute to the decay rate. The e-e interaction only produces a finite decay rate at higher orders and beyond the Hartree-Fock approximation. In order to compute $\Gamma$, we therefore study the self-energy due to e-ph interactions, $\Sigma_{ep}$, see Fig.~\ref{fig5}(d). The rate follows from the imaginary part of $\Sigma_{ep}(E,\mathbf p)$, which in turn is obtained by analytic continuation $i\omega\to E+i0^+$, see, e.g., Ref.~\cite{Giraud2011}. To be specific, we study the lifetime of a Weyl quasi-particle in the state $|\mathbf p,\mu=+\rangle$ with momentum $\mathbf p$, taken from the positive-energy ($\mu=+$) band. We consider the on-shell case, $E=v|\mathbf p|$. The quasi-particle decay rate is then given by \begin{equation} \label{lifetimedef} \Gamma ({\bf p}, T)= - 2 \ {\rm Im}\ \langle \mathbf p,+|\Sigma_{ep}(\mathbf p)|\mathbf p,+\rangle. \end{equation} Let us now make use of the results of Sec.~\ref{sec3a4} and App.~\ref{app2}. We first observe that the decay rate must vanish right at the Weyl point, $\Gamma(\mathbf p=0,T)=0$, since then momentum and energy conservation cannot be satisfied for any phonon momentum ${\mathbf q}\ne 0$. For $|\mathbf p|\ne 0$, it is convenient to rescale $\mathbf q=\xi |\mathbf p| \hat {\mathbf q}$ with the dimensionless parameter $\xi$. 
Denoting the integration angles by $\theta_{\mathbf q}$ and $\phi_{\mathbf q}$, and using $\langle \mathbf p,+|\boldsymbol\sigma\cdot \mathbf q|\mathbf p,+\rangle=|\mathbf q| \hat {\mathbf q}\cdot \hat{\mathbf p},$ we find \begin{widetext} \begin{eqnarray} \Gamma({\mathbf p},T) &=&\frac{ g_e^2g_{ph}^2 c_{ph}^2 |\mathbf p|}{8\pi^2\rho_0} \int_0^\infty d\xi \,\xi^2\int_0^\pi d\theta_{\mathbf q}\sin\theta_{\mathbf q}\int_{-\pi}^{\pi}d\phi_{\mathbf q}\, \gamma(\hat{\mathbf q}) \label{decayrate}\sum_{s=\pm} \Biggl \{ F_1^{(s)}(|\mathbf p|,\xi,\hat {\mathbf q}\cdot \hat{\mathbf p}) \times \\ &\times& \delta\left((v+s c_{ph} \xi)^2 -v^2\left(1+\xi^2+2\xi\hat{\mathbf q}\cdot\hat{\mathbf p}\right)\right) \nonumber + \ F^{(s)}_2(|\mathbf p|,\xi,\hat {\mathbf q}\cdot \hat{\mathbf p}) \ \delta\left(v^2\left (1-s\sqrt{1+\xi^2+2\xi\hat {\mathbf q}\cdot \hat{\mathbf p}}\right)^2-c_{ph}^2\xi^2\right) \Biggr\}, \end{eqnarray} \begin{eqnarray} \label{Faux} F_1^{(s=\pm)}&=& g^{(s)}_1(\xi) \frac{n_B\left(s c_{ph} |\mathbf p|\xi\right)}{s c_{ph} \xi} \left[2v+s c_{ph} \xi+ v\xi\hat {\mathbf q}\cdot \hat{\mathbf p}\right],\\ \nonumber F_2^{(s=\pm)}&=&-g^{(s)}_2(\xi,\hat {\mathbf q}\cdot \hat{\mathbf p}) \frac{n_F\left(sv|\mathbf p|\sqrt{ 1+\xi^2+2\xi\hat {\mathbf q}\cdot \hat{\mathbf p}}\right)}{sv\sqrt{1+\xi^2+2\xi\hat {\mathbf q}\cdot \hat{\mathbf p}}}\left[v\sqrt{1+\xi^2+2\xi\hat {\mathbf q}\cdot \hat{\mathbf p}}+ sv\left(1+\xi \hat {\mathbf q}\cdot \hat{\mathbf p}\right)\right], \end{eqnarray} \end{widetext} with $\gamma(\hat {\bm q})$ in Eq.~\eqref{phononpotential}, $n_F(\omega)=1/(e^{\beta\omega}+1)$, $n_B(\omega)=1/(e^{\beta\omega}-1)$, and \begin{eqnarray} g_1^{(-)}&=&{\rm sgn}\left(1-\frac{c_{ph}\xi}{v}\right), \quad g_1^{(+)}=g_2^{(-)}=1, \\ \nonumber \quad g_2^{(+)}&=&\textrm{sgn}\left(1-\sqrt{1+\xi^2+2\xi\hat{\mathbf q}\cdot\hat{\mathbf p}}\right). \end{eqnarray} \subsection{Zero-temperature limit}\label{sec4c2} Let us first address the $T=0$ case, where only $F_1^{(-)}$ in Eq.~\eqref{Faux} yields a finite contribution to the decay rate, \begin{eqnarray} && \Gamma({\mathbf p}, T=0)=\frac{ g_e^2g_{ph}^2 c_{ph}|{\mathbf p}| }{4\pi^2\rho_0v} \int_0^\pi d\theta_{\mathbf q}\sin\theta_{\mathbf q}\int_{-\pi}^{\pi}d\phi_{\mathbf q} \nonumber\\ \label{zerotemp} && \qquad \times \ \gamma(\hat{\mathbf q})\left[1-(\hat {\mathbf q}\cdot \hat {\mathbf p})^2 \right]\Theta(-\hat{\mathbf q}\cdot \hat {\mathbf p}), \end{eqnarray} where $\Theta(x)$ is the Heaviside step function. Since the integral in Eq.~\eqref{zerotemp} is finite, we conclude that the $T=0$ rate scales as $\Gamma \sim |\mathbf p|$ when approaching the Weyl point. \subsection{Finite temperatures} \label{sec4c3} Next we consider low but finite temperatures in the regime \begin{equation}\label{Tregime} c_{ph}|\mathbf p|\ll T \ll \textrm{min}(v|\mathbf p|,c_{ph} b). \end{equation} The dominant contributions to the decay rate \eqref{decayrate} then stem from the $F_1^{(\pm)}$ terms in Eq.~(\ref{decayrate}), where the Bose factors can be approximated by $n_B\simeq \pm T/(c_{ph}|\mathbf p|\xi)$, respectively. We then obtain \begin{eqnarray}\label{singular} \Gamma({\mathbf p},T)&=& \frac{ g_e^2g_{ph}^2 T}{4\pi^2\rho_0 v} \int_0^\pi d\theta_{\mathbf q}\sin\theta_{\mathbf q}\int_{-\pi}^{\pi}d\phi_{\mathbf q}\\ \nonumber &\times& \frac{\gamma(\hat{\mathbf q})}{|\hat{\mathbf q}\cdot \hat{\mathbf p}|} \left[1-(\hat {\mathbf q}\cdot \hat{\mathbf p})^2\right]\Theta(-\hat{\mathbf q}\cdot \hat {\mathbf p}). 
\end{eqnarray} However, the integral (\ref{singular}) diverges logarithmically at the boundary of the hemisphere $\hat {\mathbf q}\cdot \hat{\mathbf p}<0$, corresponding to small-angle scattering processes with $\xi\to 0$. This infrared divergence is related to the long-range character of the piezoelectric interaction. Note that so far we have always assumed $T=0$, with the Fermi energy located right at the Weyl point. In that case, the unscreened Coulomb potential can be used. For the finite-temperature quasi-particle decay rate, we need to be more careful since finite-energy states within an energy window of width $\approx T$ around the Weyl point are also involved. For such states, the long-range Coulomb interaction is modified by dynamic screening \cite{Throckmorton,Kozii}. By taking into account screening, we now show that the above divergence is indeed removed. Dynamic screening of the Coulomb interaction can be included by replacing the permittivity according to \cite{MahanBook} \begin{equation} \varepsilon \mapsto \varepsilon(q) = \left( 1-\frac{g_e^2}{\mathbf q^2} \Pi(q) \right) \varepsilon, \end{equation} where $\Pi(q)$ is the polarization function. Within the standard random-phase approximation, we take $\Pi(q)$ to be the noninteracting polarization bubble, cf.~Fig.~\ref{fig5}(c), where the $T=0$ limit of the polarization function yields a good description for the temperature regime \eqref{Tregime}. A temperature dependence of the decay rate is then generated only by e-ph interactions (we note that disorder effects could modify our expressions). To obtain the dominant terms contributing to $\Gamma(\mathbf p,T)$ in this regime, the logarithmic on-shell term calculated in Refs.~\cite{Throckmorton,Abrikosov2} suffices, \begin{equation} \Pi(q)\simeq -\frac{N |\mathbf q|^2}{6\pi^2 v}\ln\left (2b/|\mathbf q|\right), \end{equation} where $b$ again serves as the large-momentum cutoff. Note that two factors of $\varepsilon^{-1}$ appear in Eq.~\eqref{singular}, associated with $g_e^2$ and $g_{ph}^2$, respectively. One can identify these two factors with the two wiggly lines in the self-energy diagram in Fig.~\ref{fig5}(d). Dressing both lines with the polarization bubble, we arrive at a modified version of Eq.~(\ref{singular}) which takes into account screening, \begin{eqnarray}\label{finite} &&\Gamma({\mathbf p},T)=\frac{g_e^2g_{ph}^2 T}{4\pi^2\rho_0 v} \int_0^\pi d\theta_{\mathbf q}\sin\theta_{\mathbf q}\int_{-\pi}^{\pi}d\phi_{\mathbf q}\\ &&\quad \times \nonumber \frac{\gamma(\hat{\mathbf q})}{|\hat {\mathbf q}\cdot \hat{\mathbf p}|} \frac{1-( \hat {\mathbf q}\cdot \hat{\mathbf p})^2 }{\left[1+\frac{N g_e^2}{6\pi^2 v}\ln\left(\frac{1}{|\hat{\mathbf q}\cdot \hat {\mathbf p}|}\frac{b}{|\mathbf p|}\right) \right]^2} \Theta(-\hat{\mathbf q}\cdot \hat {\mathbf p}). \end{eqnarray} Using $|\mathbf p|\ll b$, the regime \eqref{Tregime} is therefore characterized by a quasi-particle decay rate which scales as \begin{equation}\label{decayratefinal} \Gamma(\mathbf p,T)\sim \frac {T}{\ln(b/|\mathbf p|)} . \end{equation} We observe that $\Gamma({\mathbf p},T)$ vanishes for $|\mathbf p|\to 0$, as expected from kinematic constraints. However, the slow logarithmic scaling with $|\mathbf p|$, together with the linear-$T$ dependence, suggests that the quasi-particle lifetime of Weyl fermions is significantly reduced by the piezoelectric e-ph interaction, even when one stays in the very close vicinity of a Weyl point.
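To make the scaling \eqref{decayratefinal} explicit, the angular integral in Eq.~\eqref{finite} can be evaluated numerically. The following minimal sketch (Python) takes $\hat{\mathbf p}=\hat{\mathbf z}$, strips all dimensionful prefactors, and uses an illustrative value for $\alpha=g_e^2/(4\pi v)$; these are our own choices made purely for demonstration. The output decreases only logarithmically with $|\mathbf p|/b$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

A, B = -2.62, -0.43            # TaAs ratios e15/e33, e31/e33

def gamma_tilde(th):
    """gamma(theta) of Eq. (ff2) with the e33^2/c_ph^2 prefactor stripped."""
    s, c = np.sin(th), np.cos(th)
    return (c**2 * (1 + (2*A + B - 1) * s**2)**2
            + s**2 * ((B - 1) * c**2 + A * np.cos(2*th))**2)

def angular_factor(p_over_b, N=2, alpha=0.3):
    """Dimensionless angular integral of Eq. (finite) for phat = zhat.

    Substituting u = -cos(theta) = exp(-t) removes the integrable
    u -> 0 endpoint singularity; alpha is an illustrative value of
    g_e^2/(4*pi*v)."""
    a = 2 * N * alpha / (3 * np.pi)    # N g_e^2 / (6 pi^2 v)
    L = np.log(1.0 / p_over_b)         # ln(b/|p|)
    def integrand(t):
        u = np.exp(-t)                 # u = |qhat . phat|
        th = np.arccos(-u)             # theta in (pi/2, pi)
        return gamma_tilde(th) * (1 - u**2) / (1 + a * (L + t))**2
    return 2 * np.pi * quad(integrand, 0, np.inf)[0]

for p in (1e-2, 1e-4, 1e-6):           # |p|/b
    print(p, angular_factor(p))        # only a slow logarithmic decrease
\end{verbatim}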
\section{Concluding remarks}\label{sec5} In this work we have studied the long-range attractive interactions mediated by the piezoelectric electron-phonon coupling in undoped non-centrosymmetric Weyl semimetals. These interactions exhibit a significant angular dependence and compete with the repulsive Coulomb interactions. This competition is mainly governed by the dimensionless piezoelectric coupling strength $\bar\gamma$ in Eq.~\eqref{bargamma}. Within a static approximation for the effective e-e interaction, we find that for $\bar\gamma>1$ the attractive interactions outweigh the repulsive Coulomb part. We then predict a conventional BCS superconducting phase with spin-singlet $s$-wave pairing, even though the normal density of states vanishes at the Fermi level. We have performed a mean-field analysis to study this state in some detail. Given our rough estimate $\bar\gamma\approx 0.20$ for TaAs (see Sec.~\ref{sec4a}), the above BCS scenario is probably hard to realize in this material. However, for $\bar\gamma<1$, other, and arguably even more interesting, interacting phases may be stabilized. For example, our analysis in Sec.~\ref{sec4b} suggests that a nodal-ring gapless spin-triplet superconductor will be realized for intermediate values of $\bar \gamma$. Our RG analysis also shows that the critical values of $\bar \gamma$ at which superconducting instabilities are found can be pushed upwards by retardation effects. For small $\bar\gamma$, we expect that the Weyl semimetal phase remains stable. Nonetheless, the piezoelectric coupling should leave a clear experimental trace in the quasi-particle decay rate at finite temperature. In particular, we find that this rate scales as $\Gamma\sim T/\ln (b/|\mathbf p|)$ at low-to-intermediate $T$. Although $\Gamma=0$ right at a Weyl point ($\mathbf p=0$), the weak logarithmic scaling with $|\mathbf p|$ suggests that the quasi-particle lifetime will be rather short even for very small (but finite) $|\mathbf p|$. In any case, we hope that future theoretical and experimental research will continue to study the interesting consequences of piezoelectric couplings in Weyl semimetals. \begin{acknowledgements} We acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG), Grant No.~EG 96/12-1. R.G.P. thanks the Humboldt foundation for a Bessel award, enabling his extended stay in D\"usseldorf. Research at IIP-UFRN is supported by the Brazilian ministries MEC and MCTIC. \end{acknowledgements}
\subsection{Problem Formulation} The problem of planar sliding in which, owing to the finite size of the contact area, torque is transferred between the object and the manipulator can be regarded as an extension of pushing problems~\citep{lynch1992manipulation,zhou2017fast}. Some of the previously suggested approaches can also be adapted to this problem~\citep{kao1992quasistatic,shi2017dynamic,chavan-dafle2018rss}. Nevertheless, there has not yet been a complete analysis of this problem per se. Thus, the aim of this article is to provide an adequate mathematical model of planar sliding using friction patches for the purpose of control and planning. \subsection{Summary of nomenclature} Capital bold letters denote matrices. Vectors are denoted by an arrow above a symbol, while small bold letters represent coordinate vectors. Scalars are typeset in roman. \begin{center} \begin{supertabular}{ll} $\vc{q}_o$ & generalized coordinates of the object \\ $\vc{q}_h$ & generalized coordinates of the patch \\ $\vc{\nu}_o$ & twist of the object wrt frame $\{\mathrm{O}\}$ \\ $\vc{\nu}_h$ & twist of the patch wrt frame $\{\mathrm{H}\}$ \\ $\vc{v}_h$ & linear velocity of the patch\\ $\vc{\nu}_{rel}$ & relative twist of the patch and the object in $\{\mathrm{H}\}$\\ $\vc{w}_o$ & wrenches exerted on the object wrt frame $\{\mathrm{O}\}$\\ $\vc{w}_h$ & wrenches exerted on the hand through the patch\\ $\vc{p}$ & position of the pivot point wrt frame $\{\mathrm{H}\}$\\ $\vc{m}$ & torque \\ $\mx{J}(\vc{r})$ & Jacobian for a point at relative coordinates $\vc{r}$ \\ $\mx{R}(\theta)$ & rotation matrix about the $z$-axis by $\theta\,$rad\\ $\mx{G}(\vc{q})$ & Jacobian for a frame with relative coordinates $\vc{q}$\\ $\mx{A}$ & LS of object-surface contact wrt frame $\{\mathrm{O}\}$\\ $\mx{B}$ & LS of hand-object contact wrt frame $\{\mathrm{H}\}$\\ $\hat{\mx{A}}$ & LS of object-surface contact wrt frame $\{\mathrm{H}\}$ \\ $\mx{\Phi}$ & principal sliding wrenches \\ $\mx{\Lambda}$ & generalized eigenvalues of $\hat{\mx{A}}$ and $\mx{B}$\\ \end{supertabular} \end{center} \subsection{Preliminaries} The generalized coordinates of the object are denoted by \begin{equation*} \vc{q}_o = \left[x_o,\, y_o,\, \theta_o \right]^T \end{equation*} and the twist and the wrenches expressed in the body-fixed frame are \begin{align*} \vc{\nu}_o &= \left[v_{xo},\, v_{yo},\, \omega_o \right]^T, \\ \vc{w}_o &= \left[f_{xo},\, f_{yo},\, m_o \right]^T. \end{align*} Similarly, the respective quantities for the friction patch are defined and denoted by the subscript $h$. The relative coordinates of frame $\{\mathrm{H}\}$ with respect to $\{\mathrm{O}\}$ can be written as \begin{align*} \vc{q}_{rel} &:= \left[x_{r},\, y_{r},\, \theta_{r} \right]^T \\ &\phantom{:}=\mx{R}(-\theta_o) \left(\vc{q}_{h} - \vc{q}_{o}\right) \end{align*} where $\mx{R}(\cdot)$ is the rotation matrix about the $z$-axis \begin{align} \label{eq:rotz} \mx{R}(\theta):= \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix}. \end{align} We also define, for a position vector $\vc{r} = [x_r,\, y_r]^T$, \begin{align} \label{eq:jac} \mx{J}(\vc{r}) := \begin{bmatrix} 1 & 0 & -y_r \\ 0 & 1 & x_r \\ 0 & 0 & 1 \end{bmatrix} \end{align} with the property that $\mx{J}(\vc{r}_1)\mx{J}(\vc{r}_2) = \mx{J}(\vc{r}_1 + \vc{r}_2)$ for any $\vc{r}_1$ and $\vc{r}_2$. Accordingly, \begin{align*} \mx{J}^{-1}(\vc{r}) = \mx{J}(-\vc{r}) = \begin{bmatrix} 1 & 0 & y_r \\ 0 & 1 & -x_r \\ 0 & 0 & 1 \end{bmatrix}. 
\end{align*} \begin{prop}\label{thm:trans} The relation between planar twists $\vc{\nu}_p$ given in frame $\{\mathrm{P}\}$ and $\vc{\nu}_o$ given in frame $\{\mathrm{O}\}$, with relative coordinates $\vc{q}_{rel} = \left[\vc{r},\, \theta_{r} \right]^T$ is \begin{equation} \vc{\nu}_p= \mx{G} \vc{\nu}_o \end{equation} where \begin{equation} \mx{G}: = \mx{G}(\vc{q}_{rel}) = \mx{R}^T(\theta_r) \mx{J}\left( \vc{r} \right). \end{equation} Similarly, the wrenches are related according to \begin{equation} \vc{w}_o = \mx{G}^T \vc{w}_p. \end{equation} \end{prop} \begin{proof} By changing the point of reference, we find \begin{subequations} \begin{align} \vec{v}_p &= \vec{v}_o + \vec{\omega}_o \times \overrightarrow{OP}, \label{eq:shiftVel} \\ \vec{\omega}_p &= \vec{\omega}_o. \end{align} \end{subequations} Rewriting~\eqref{eq:shiftVel} in the frame $\{\mathrm{O}\}$, we obtain \begin{align*} \mx{R} \vc{v}_p &=\vc{v}_o + \omega_o \hat{\vc{k}} \times (x_r \hat{\vc{i}} + y_r \hat{\vc{j}} ) \\ &=\vc{v}_o + (x_r \hat{\vc{j}} - y_r \hat{\vc{i}}) \omega_o, \end{align*} where $\hat{\vc{i}}$, $\hat{\vc{j}}$, and $\hat{\vc{k}}$ denote unit coordinate vectors. Similarly for the forces, we have \begin{subequations} \begin{align} \vec{f}_o &= \vec{f}_p \\ \vec{m}_o &= \vec{m}_p + \overrightarrow{OP} \times \vec{f}_p. \label{eq:shiftForce} \end{align} \end{subequations} Rewriting~\eqref{eq:shiftForce} in the frame $\{\mathrm{O}\}$ results in \begin{align*} m_o \hat{\vc{k}} & =m_p \hat{\vc{k}} + (x_r \hat{\vc{i}} + y_r \hat{\vc{j}}) \times ( f_{xo} \hat{\vc{i}} + f_{yo} \hat{\vc{j}} ) \\ &=(m_p + x_r f_{yo} - y_r f_{xo} ) \hat{\vc{k}} . \end{align*} Additionally, the change of the frame from $\{\mathrm{P}\}$ to $\{\mathrm{O}\}$ requires \begin{align*} \vc{f}_o = \mx{R} \vc{f}_p. \end{align*} The proof is completed by rewriting the results in matrix form. \end{proof} Using the Coulomb model of friction between surfaces, the friction wrench with respect to point $o$ can be calculated as \begin{subequations} \label{eq:LSInts} \begin{align} \vc{f}_o &= - \int_D \dfrac{\vc{v}(\vc{r})}{\norm{\vc{v}(\vc{r})}} \mu_r p(\vc{r}) \,\mathrm{d}A, \label{eq:fo}\\ \vc{m}_o &= - \int_D \dfrac{(\vc{r}-\vc{o}) \times \vc{v}(\vc{r})}{\norm{\vc{v}(\vc{r})}} \mu_r p(\vc{r}) \,\mathrm{d}A, \label{eq:mo} \end{align} \end{subequations} where $p(\vc{r})$ denotes the pressure and $\vc{v}(\vc{r})$ denotes the relative linear velocity between sliding surfaces at position $\vc{r}$. The integral is calculated over the area $D$. Based on the assumed quadratic model of the limit surfaces, the relation between~\eqref{eq:fo} and~\eqref{eq:mo} can be approximated by an implicit function \begin{align} H(\vc{w}) := \vc{w}^T \mx{A} \vc{w} = 1, \label{eq:LS} \end{align} for a positive definite matrix $\mx{A} \in \mathbb{R}^{3\times 3}$. The corresponding twist is parallel to the gradient of $H(\vc{w})$. Thus, \begin{align} \vc{\nu} &= -k' \nabla H(\vc{w}) \nonumber \\ &= - k \mx{A} \vc{w}, \quad k \geq 0. \label{eq:gradH} \end{align} Note that for a given $\vc{w}$ applied to an object sliding on a surface, there will be no relative motion if \begin{align*} H(\vc{w}) < 1, \end{align*} and the object will be accelerating if $H(\vc{w})$ is larger than one. By combining~\eqref{eq:LS} and~\eqref{eq:gradH}, it is possible to eliminate $k$ and hence to find wrenches as a function of the twist \begin{align} \vc{w} = - \dfrac{\mx{A}^{-1} \vc{\nu}}{\sqrt{\vc{\nu}^T \mx{A}^{-1} \vc{\nu}}}. 
\label{eq:kLS} \end{align} \begin{prop} Assume that the limit surface calculated with respect to frame $\{\mathrm{O}\}$ can be represented by \begin{equation} \vc{w}_o^T \mx{A} \vc{w}_o = 1, \end{equation} where $\mx{A}$ is a positive definite matrix. Then, the limit surface with respect to frame $\{\mathrm{P}\}$, which has the relative coordinates $\left[\vc{r},\, \theta_{r} \right]^T$, is \begin{equation} \vc{w}_p^T \hat{\mx{A}} \vc{w}_p = 1, \end{equation} where \begin{equation} \hat{\mx{A}} = \mx{G} \mx{A} \mx{G}^T \end{equation} is a positive definite matrix and $\mx{G} = \mx{R}^T(\theta_r)\mx{J}\left( \vc{r} \right)$. \end{prop} \begin{proof} The result is achieved by the direct application of Proposition~\ref{thm:trans}. For the positive definiteness, note that \begin{align*} \vc{w}^T \hat{\mx{A}} \vc{w} = (\mx{G}^T \vc{w})^T \mx{A} (\mx{G}^T \vc{w}) \geq 0. \end{align*} Since $\mx{G}$ is full rank, $\mx{G}^T \vc{w}$ is zero if and only if $\vc{w} = \vc{0}$. Consequently, the matrix $\hat{\mx{A}}$ is positive definite. \end{proof} The following theorem shows that a limit surface characterized by any positive definite matrix can be represented by a diagonal matrix with respect to a frame assigned at the COP. \begin{thm}\label{thm:decomp} Any positive definite matrix $\mx{A} \in \mathbb{R}^{3\times 3}$ can be decomposed as \begin{align} \mx{A} = \mx{R}^T \mx{J} \mx{\Lambda} \mx{J}^T \mx{R}, \label{eq:decomp} \end{align} where $\mx{\Lambda}$ is a diagonal matrix, and $\mx{R}$ and $\mx{J}$ are rotation and Jacobian matrices as defined in~\eqref{eq:rotz} and~\eqref{eq:jac}, respectively. \end{thm} \begin{proof} See Appendix~\ref{sec:appx2}. \end{proof} \subsection{Force and velocity relations} The limit surfaces and the relation between the friction wrench exerted on the hand through the patch $\vc{w}_h$ and the wrench affecting the object $\vc{w}_o$ are: \begin{align} \vc{w}_o^T \mx{A} \vc{w}_o &= 1, \label{eq:HWo}\\ \vc{w}_h^T \mx{B} \vc{w}_h &= 1, \label{eq:HWh} \\ \vc{w}_o - \mx{G}^T \vc{w}_h &= \vc{0}, \label{eq:forceBalance} \end{align} where $\mx{G} := \mx{G}(\vc{q}_{rel})$ denotes the Jacobian corresponding to the relative coordinates of frame $\{\mathrm{H}\}$ with respect to $\{\mathrm{O}\}$. Equation~\eqref{eq:forceBalance} is derived from the fact that the wrenches on the object sum to zero under the assumption of quasi-static manipulation, i.e., the inertial forces are negligible. Additionally, we have the following velocity relations \begin{align} \vc{\nu}_o &= - k_1 \mx{A} \vc{w}_o, \quad k_1 \geq 0 \label{eq:oVelForce}\\ \vc{\nu}_{rel} &= - k_2 \mx{B} \vc{w}_h,\quad k_2 \geq 0 \label{eq:hVelForce}\\ \vc{\nu}_{rel} &= \vc{\nu}_h - \mx{G} \vc{\nu}_o, \label{eq:Vrel} \end{align} where $\vc{\nu}_{rel}$ denotes the relative twist of the patch with respect to the object expressed in $\{\mathrm{H}\}$. Equations~\eqref{eq:oVelForce} and~\eqref{eq:hVelForce} are the counterparts of~\eqref{eq:gradH}, while~\eqref{eq:Vrel} is obtained by first transforming $\vc{\nu}_o$ to the frame of the patch and then subtracting it from the twist of the patch. \subsection{Solution} Using~\eqref{eq:forceBalance} it is possible to rewrite~\eqref{eq:HWo} as \begin{align} \vc{w}_h^T \hat{\mx{A}} \vc{w}_h = 1, \label{eq:HWot} \end{align} where $\hat{\mx{A}} = \mx{G} \mx{A} \mx{G}^T$ characterizes the limit surface of the object at frame $\{\mathrm{H}\}$. 
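As a concrete numerical illustration of this transformation, and of the simultaneous diagonalization invoked in the next step, consider the short Python sketch below; the matrices $\mx{A}$, $\mx{B}$ and the relative pose are illustrative placeholders (any positive definite pair will do), not identified parameters.
\begin{verbatim}
# Sketch: limit surface of the object expressed at the hand frame, and
# its simultaneous diagonalization with the patch limit surface.
# A, B, r, theta_r are illustrative placeholders.
import numpy as np
from scipy.linalg import eigh

def R(theta):                      # rotation about z, Eq. (eq:rotz)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def J(r):                          # Jacobian of Eq. (eq:jac)
    return np.array([[1, 0, -r[1]], [0, 1, r[0]], [0, 0, 1.0]])

A = np.diag([1.0, 1.0, 0.5])       # object-surface limit surface (at {O})
B = np.diag([2.0, 2.0, 0.3])       # hand-object limit surface (at {H})
r, theta_r = np.array([-0.03, 0.07]), 0.1

G = R(theta_r).T @ J(r)            # G = R^T(theta_r) J(r)
A_hat = G @ A @ G.T                # limit surface of the object at {H}

# B Phi = A_hat Phi Lambda with Phi^T A_hat Phi = I, Phi^T B Phi = Lambda:
lam, Phi = eigh(B, A_hat)
print("Lambda =", lam)
\end{verbatim}
For these placeholder values, the entries of $\mx{\Lambda}$ straddle unity, so the sign pattern of $\mx{\Lambda} - \mx{I}$ is mixed and the pivoting mode of the case analysis below becomes possible.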
By solving the generalized eigenvalue problem $\mx{B} \mx{\Phi} = \hat{\mx{A}} \mx{\Phi} \mx{\Lambda}$, we can simultaneously diagonalize $\hat{\mx{A}}$ and $\mx{B}$ such that \begin{align*} \mx{\Lambda} &= \mx{\Phi}^T \mx{B} \mx{\Phi}, \\ \mx{I} &= \mx{\Phi}^T \hat{\mx{A}} \mx{\Phi}, \end{align*} where $\mx{I} \in \mathbb{R}^{3\times 3}$ denotes the identity matrix. Thus, by applying $\vc{w}_h = \mx{\Phi} \vc{w}$ we transform~\eqref{eq:HWh} and~\eqref{eq:HWot} to \begin{subequations} \label{eq:intersecES} \begin{align} \vc{w}^T \mx{\Lambda} \vc{w} &= 1, \label{eq:ellips}\\ \vc{w}^T \vc{w} &= 1. \label{eq:sphere} \end{align} \end{subequations} Moreover, by subtracting~\eqref{eq:sphere} from \eqref{eq:ellips}, we find the normal form \begin{subequations} \label{eq:intersec} \begin{align} \vc{w}^T \mx{C} \vc{w} &= 0, \label{eq:intersec1}\\ \vc{w}^T \vc{w} &= 1, \label{eq:intersec2} \end{align} \end{subequations} where $\mx{C} := \mx{\Lambda} - \mx{I}$ is a diagonal matrix. Note that if there is a solution to~\eqref{eq:intersec}, it is possible to recover the wrenches at the patch and the object frames using the following relations \begin{subequations} \label{eq:transform} \begin{align} \vc{w}_h &= \mx{\Phi} \vc{w}, \label{eq:Wh}\\ \vc{w}_o &= \mx{G}^T \vc{w}_h \end{align} \end{subequations} In view of~\eqref{eq:intersecES}, feasible wrenches $\vc{w}$ lie on the intersection of an ellipsoid with the unit sphere. Accordingly, there are several possible cases: \begin{itemize} \item The limit surface of the object lies entirely inside the limit surface of the patch, hence $\mx{C} \prec 0$. Since any required forces for sliding can be provided through the patch, the hand sticks to the object ($\vc{\nu}_{rel} = \vc{0}$). The only possible mode in this case is called \emph{sticking}. \item The limit surface of the patch is entirely contained in the limit surface of the object, hence $\mx{C} \succ 0$. In this case, the hand cannot provide enough force through the patch for sliding the object against the surface, hence the object remains still and the patch slides against it ($\vc{\nu}_{o} = \vc{0}$). We call the corresponding mode \emph{slipping}. \item Otherwise, there exists a $\vc{\nu}_h$ for which the hand can move the object while allowing it to pivot. We call this mode \emph{pivoting}. An example in which pivoting is possible is illustrated in Figure~\ref{fig:interSphere}. \end{itemize} \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{03_interEllips.pdf} \caption{Visualization of Eq.~\eqref{eq:intersecES} for an example where pivoting mode is possible. The vector $\vc{w}$ is unitless.} \label{fig:interSphere} \end{center} \end{figure} Using the transformations~\eqref{eq:transform}, it is also possible to rewrite ~\eqref{eq:oVelForce}--\eqref{eq:Vrel} to obtain \begin{align} \mx{\Phi}^T \mx{G} \vc{\nu}_o &= - k_1 \vc{w}, \quad k_1 \geq 0 \label{eq:oVelForceT} \\ \mx{\Phi}^T \vc{\nu}_{rel} &= - k_2 \mx{\Lambda} \vc{w},\quad k_2 \geq 0 \label{eq:rVelForceT}\\ \vc{\nu}_{rel} &= \vc{\nu}_h - \mx{G} \vc{\nu}_o. \label{eq:VrelT} \end{align} From~\eqref{eq:oVelForceT}--\eqref{eq:VrelT}, we conclude \begin{align*} \tilde{\vc{\nu}}_h = - (k_1 \mx{I} + k_2 \mx{\Lambda}) \vc{w}, \end{align*} where $\tilde{\vc{\nu}}_h := \mx{\Phi}^T \vc{\nu}_h$. Let us define $\alpha = \dfrac{k_2}{k_1} \geq 0$. Accordingly, \begin{align} \vc{w} = -\dfrac{1}{k_1} (\mx{I} + \alpha \mx{\Lambda})^{-1} \tilde{\vc{\nu}}_h. 
\label{eq:wvh} \end{align} Substituting~\eqref{eq:wvh} into~\eqref{eq:intersec1} results in \begin{align} \tilde{\vc{\nu}}_h^T \mx{C} (\mx{I} + \alpha \mx{\Lambda})^{-2} \tilde{\vc{\nu}}_h = 0, \label{eq:alpha4vh} \end{align} which is equivalent to \begin{multline} c_1 \left(\dfrac{\tilde{v}_{xh}}{\alpha \lambda_1 + 1} \right)^2 + c_2 \left(\dfrac{\tilde{v}_{yh}}{\alpha \lambda_2 + 1} \right)^2 + c_3 \left(\dfrac{\tilde{\omega}_{h}}{\alpha \lambda_3 + 1} \right)^2 \\= 0, \label{eq:solAlpha} \end{multline} where $c_i$ and $\lambda_i$ are the diagonal elements of $\mx{C}$ and $\mx{\Lambda}$, respectively. Equation~\eqref{eq:solAlpha} can be solved for $\alpha$. Afterwards, by substituting~\eqref{eq:wvh} into~\eqref{eq:intersec2}, it is possible to calculate $k_1$. A relation between $\vc{\nu}_o$ and $\vc{\nu}_h$ can also be found by substituting~\eqref{eq:wvh} back into~\eqref{eq:oVelForceT} \begin{align*} {_h}\tilde{\vc{\nu}}_o = (\mx{I} + \alpha \mx{\Lambda})^{-1} \tilde{\vc{\nu}}_h. \end{align*} After some algebraic manipulations, we have \begin{align} {_h}\vc{\nu}_o &= \hat{\mx{A}} (\hat{\mx{A}} + \alpha \mx{B})^{-1} \vc{\nu}_h \nonumber \\ &= (\mx{I} + \alpha \mx{B} \hat{\mx{A}}^{-1})^{-1} \vc{\nu}_h, \label{eq:hvovh} \end{align} where ${_h}\vc{\nu}_o := \mx{G} \vc{\nu}_o$ is the twist of the object expressed in $\{\mathrm{H}\}$. Using~\eqref{eq:VrelT}, we find the relative twist to be \begin{align} \vc{\nu}_{rel} &= \left(\mx{I} + (\alpha \mx{B} \hat{\mx{A}}^{-1})^{-1} \right)^{-1} \vc{\nu}_h \nonumber \\ &= \alpha (\alpha \mx{I} + \hat{\mx{A}}\mx{B}^{-1})^{-1} \vc{\nu}_h. \label{eq:vrvh} \end{align} When the patch slides against the object, there is a pivot point, which can be determined by finding the point where the object and the patch have the same velocity. In other words, the \emph{pivot point} is the instantaneous center of rotation (COR) between the patch and the object. Using the velocity transfer relation according to Proposition~\ref{thm:trans}, we conclude that the location of the pivot point in the hand frame is \begin{align} \vc{p} := [x_p,\, y_p]^T =\dfrac{1}{ \omega_r} [-v_{yr},\, v_{xr}]^T, \label{eq:ppoint} \end{align} where $\vc{\nu}_{rel} = \left[v_{xr},\, v_{yr},\, \omega_r \right]^T $ denotes the relative twist of the patch with respect to the object expressed in $\{\mathrm{H}\}$. In sticking mode, the pivot point is indeterminate and we may choose any point, e.g., the origin of $\{\mathrm{H}\}$. However, at the boundary of pivoting and sticking modes, it is possible to make the pivot point a continuous function by evaluating the limit as $\alpha \to 0$. In view of~\eqref{eq:vrvh}, this is equivalent to substituting~$\vc{\nu}_{rel}$ in~\eqref{eq:ppoint} with \begin{align*} \bar{\vc{\nu}}_{rel} = \mx{B}\hat{\mx{A}}^{-1} \vc{\nu}_h. \end{align*} \subsection{Regions of validity} \label{sec:regVal} If there is an $\alpha > 0$ satisfying~\eqref{eq:solAlpha}, the pivoting mode is active, which implies having a finite pivot point. Otherwise, the wrenches can be calculated to identify which mode is valid. In sticking mode, from the twist of the patch and the fact that the object slides on the surface, we can easily calculate $\vc{w}$: \begin{subequations} \begin{align} \tilde{\vc{\nu}}_h &= -k_1 \vc{w}, \label{eq1:objSlide} \\ 1 &= \vc{w}^T \vc{w}. \label{eq2:objSlide} \end{align} \end{subequations} Then, the sticking mode is valid if the contact between the patch and the object can be sustained by friction, i.e., \begin{align} \vc{w}^T \mx{\Lambda}\vc{w} < 1. 
\label{eq:condStickW} \end{align} Subtracting~\eqref{eq2:objSlide} from~\eqref{eq:condStickW} results in \begin{align} \vc{w}^T \mx{C} \vc{w} < 0. \label{eq:stickCond} \end{align} Using~\eqref{eq1:objSlide}, it is possible to rewrite the condition as \begin{align} \tilde{\vc{\nu}}_h^T \mx{C} \tilde{\vc{\nu}}_h < 0. \label{eq:cond0Stick} \end{align} Note that whenever $\alpha = 0$, the relative velocity is zero and hence the mode is sticking. Since in this case Equation~\eqref{eq:alpha4vh} degenerates to condition~\eqref{eq:cond0Stick} with an equality sign, we extend the condition to also include its boundary. Accordingly, in sticking mode \begin{align} \tilde{\vc{\nu}}_h^T \mx{C} \tilde{\vc{\nu}}_h \leq 0, \label{eq:condStick} \end{align} or equivalently \begin{align} \vc{\nu}_h^T \hat{\mx{A}}^{-1} \left( \mx{B} - \hat{\mx{A}} \right) \hat{\mx{A}}^{-1}\vc{\nu}_h \leq 0. \label{eq:motionCone} \end{align} Similarly, in slipping mode \begin{subequations} \begin{align} \tilde{\vc{\nu}}_h &= -k_2 \mx{\Lambda} \vc{w}, \label{eq1:handSlide} \\ 1 &=\vc{w}^T \mx{\Lambda}\vc{w}. \label{eq2:handSlide} \end{align} \end{subequations} The slipping mode is then valid if \begin{align} \vc{w}^T \vc{w} < 1. \label{eq:condSlipW} \end{align} Subtracting~\eqref{eq:condSlipW} from~\eqref{eq2:handSlide} results in \begin{align*} \vc{w}^T \mx{C} \vc{w} > 0. \end{align*} Using an argument similar to the one above, we extend the condition to its boundary to include the case $\alpha \to \infty$ and express the condition using~\eqref{eq1:handSlide} as \begin{align} \tilde{\vc{\nu}}_h^T \mx{C} \mx{\Lambda}^{-2} \tilde{\vc{\nu}}_h = \tilde{\vc{\nu}}_h^T \mx{C} (\mx{C} +\mx{I})^{-2} \tilde{\vc{\nu}}_h \geq 0, \label{eq:condSlip} \end{align} or equivalently \begin{align} \vc{\nu}_h^T \mx{B}^{-1} \left( \hat{\mx{A}} - \mx{B} \right) \mx{B}^{-1} \vc{\nu}_h \geq 0. \end{align} \subsection{Effect of normal forces}\label{sec:normalforce} Our formulation is generic with respect to $\mx{A}$ and $\mx{B}$ describing the limit surfaces, as long as the matrices are positive definite. In fact, both matrices can be time-varying, specifically when the COP of the object does not have a fixed transformation to its COM or when the patch is deforming as a result of variations in normal or tangential forces. For surfaces with homogeneous friction coefficients and symmetrical pressure distributions, with no deformation of contact areas as a result of varying normal forces, the trajectory depends only on the ratio between the normal forces at the Hand-Object (HO) and the Object-Environment (OE) contacts, and not on their absolute values. The reason is that, given these assumptions, the normal forces as well as the friction coefficients can be factorized from $\mx{A}$ and $\mx{B}$, and in the solution only the ratio will appear. Nevertheless, the friction forces will be scaled. In general, when the normal force at the patch is changed, the friction forces at the HO and OE contacts do not change proportionally. Firstly, the lower surface has to additionally support the weight of the object; secondly, the pressure distribution may vary and become stronger closer to the patch; and thirdly, deformation of the patch may increase its contact area. To exactly model the effect of normal force, it is required to know the pressure distributions and their variation. This is not a simple task as the pressure distribution depends in general on the stiffness of the contact surfaces, geometry of the contact, and relative velocities. 
In particular, the friction patch may go through large deformations as a function of normal forces. To get an understanding of the effect of normal force, consider a special case where a flat object and a sphere-shaped soft finger following a Hertzian law are in contact. Denoting the normal force on the sphere by $f_n$, the pressure distribution at radius $r$ is~\citep{johnson_1985} \begin{align} p(r) = p_0\left(1-\dfrac{r^2}{a^2}\right)^{1/2}, \end{align} where \begin{align} p_0 = \dfrac{3}{2\pi a^2} f_n \end{align} and $a$ is the radius of the contact area. Using this pressure distribution, Equation~\eqref{eq:LSInts} allows us to calculate the maximum friction force and torque \begin{align} f_{max} &= \mu f_n, \\ m_{max} &= \mu \dfrac{3 \pi}{16} a f_n. \end{align} By changing the normal force, the radius of the contact area increases according to \begin{align} a = \left(\dfrac{3}{4} \dfrac{R}{E^*} f_n\right)^{1/3}, \end{align} where $R$ is the radius of the sphere and $E^*$ is the effective elastic modulus. As can be seen, while the tangential forces depend linearly on the normal force, the torque has a nonlinear dependence because of the increase in contact area. Accordingly, it is possible to change the ratio of the torsional to tangential friction of the patch. Another observation is that by pressing the patch harder, the COP of the object shifts more toward the patch. Although modeling the exact physical phenomenon is complicated, we can easily incorporate this effect using a computational model. For example, define $s \in \mathbb{R}$ to be a value between zero and one characterizing the percentage of the shift of the COP, \begin{align} s = 1 - (c \dfrac{f_n}{m g} + 1)^{-\delta}, \label{eq:COPmodel} \end{align} where $c$ and $\delta$ are model parameters and $m g$ is the weight of the object. Then, if the limit surface at the COP of the object is characterized by $\mx{A}_{COP}$ and~$\vc{r}$ denotes the relative position of the hand frame w.r.t. the object frame, the limit surface at the object frame is \begin{align} \mx{A} = \mx{J}(-s \vc{r}) \mx{A}_{COP} \mx{J}^T(-s \vc{r}). \end{align} A similar approach can be used to compensate for the shift of COPs due to relative velocities. See Appendix~\ref{sec:appx3} for experimental validation of the proposed model for the shift of COP. \subsection{Simulations} \begin{figure} \begin{center} \includegraphics[width=1.0\linewidth]{08_cones_with_w} \caption{Sliding an object with $\omega_h \neq 0$. On the left, the tip of the velocity arrow is inside the motion cone, thus the patch sticks to the object. On the right side, the linear velocity has been increased and the object is pivoting against the patch.} \label{fig:cones_w} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=\linewidth]{09_sliding2_exp} \caption{Sliding motion experiment: a soft finger is attached to the KUKA LBR iiwa robot. The book is being dragged toward the edge. Trajectories of the center of the object are shown in blue and of the friction patch in red.} \label{fig:sliding_exp} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=0.9\linewidth]{12_probes} \caption{Experimental setup. From left to right: robot end-effector with optical markers, spherical and square soft fingers, and the object (book) with optical markers.} \label{fig:probes} \end{center} \end{figure} We present the results of simulations of the model with the parameters specified here. The box dimensions are $15.6 \times 23.6\,$cm. 
The patch is circular with a radius of $2.0\,$cm. The mass of the box is $450\,$g. The coefficients of friction between the box and the surface and between the soft finger and the box are $\mu_{oe} = 0.2$ and $\mu_{ho} = 0.8$, respectively. We assume a uniform pressure distribution between the box and the surface when the box is not pressed and a Hertzian pressure distribution for the soft finger. To account for the shift of COP, we use~\eqref{eq:COPmodel} with $c = 0.6$ and $\delta = 2$. In Figure~\ref{fig:slidingfn}, simulation experiments in which the blue box is being moved from the left to the right by a soft finger are illustrated. The initial placement of the soft finger is $\vc{r} = [-3,\, 7]\,$cm. The end-effector moves at $1\,$cm/s in the $x$ direction. The simulation runs for $50\,$s. Each subplot corresponds to a certain constant normal force. Considering left-right top-bottom ordering, the normal forces are $1.43$, $1.7$, $4$, and $6\,$N, respectively. The trajectories of the patch, pivot point, and object are visualized. Note that for generating Figures~\ref{fig:hLocus},~\ref{fig:ppLocus}, and~\ref{fig:maxRot} presented in previous sections, we have used $f_n = 2.5\,$N, $\vc{r} = [-3.5,\, 6]\,$cm, and $f_n = 2.5\,$N, respectively, while the remaining parameters were set according to the values given in this subsection. When the soft finger is moving with an angular velocity, the mode depends on the magnitude of the linear velocity. Thus, for the same normal force, angular velocity, and direction of velocity of the finger, different modes might arise. This is because, when $\omega_h \neq 0$, the modes are no longer mapped to planar cones. Instead, the boundaries of the regions in two dimensions can be represented by the intersection of the motion cone described in Theorem \ref{thm:coneReg} with the plane corresponding to the angular velocity $\omega_h$. Figure \ref{fig:cones_w} shows an example where the finger is rotating at $-\pi/80$ rad/s for 40 seconds. On the left side, the linear velocity is chosen so that it is within the sticking region. In this case, the object is rotated by 90$^\circ$. On the right, the linear velocity is slightly increased such that it enters the pivoting region, resulting in a faster rotation of the object, exceeding the 90$^\circ$ rotation by the end of the simulation. \subsection{Robotic experiments} The experimental setup consisted of a KUKA lightweight iiwa7 robot arm, with an ATI Gamma force-torque sensor mounted at the wrist. A number of soft fingers were manufactured, and are shown in Figure~\ref{fig:probes}, together with the end-effector of the robot and the object used in the experiments (a hard-cover book). The positions of the robot and of the object were recorded using an Optitrack motion capture system. \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\linewidth]{10_estParamsAll} \caption{A sample of poses and forces in pivoting mode during a straight line motion. The three components of each vector are shown in blue, red, and yellow colors, respectively. The normal force at the hand frame has additionally been shown in the $\vc{w}_h$ plot in violet. 
Dashed lines represent the experimental results, which are almost indistinguishable from the simulation.} \label{fig:estParamsAll} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[height=7.0cm]{11_estParamsPath} \includegraphics[height=7.0cm]{11_estParamsQr} \caption{Comparison of simulated and experimental results for a straight line pivoting motion. On the left, the rectangles in blue illustrate the simulation and those in dashed black the experimental results. On the right, the locus of the origin of the patch frame (red) and the pivot point (cyan) with respect to the object are shown.} \label{fig:estParamsPath} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=0.9\linewidth]{14_regions_exp} \caption{Visualization of the sticking and pivoting cones (blue) and slipping and pivoting cones (red) based on $\abs{\omega_o}$ as a result of moving the soft finger at $1\,$cm/s in various directions.} \label{fig:regions_exp} \end{center} \end{figure} \subsubsection{Sliding trajectory} To verify the accuracy of the dynamical system presented in Section~\ref{sec:dynSys}, the soft finger mounted at the robot end-effector was pressed against the object and commanded to move at a certain velocity, while maintaining a constant normal force. An image sequence of one trial is shown in Figure \ref{fig:sliding_exp}, with the trajectories of the center of the book overlaid in blue and of the center of the friction patch in red. An experiment under similar conditions was reproduced in simulation, calculating the trajectories of the object, finger, and pivot point, and the forces that arise from this interaction. In Figure~\ref{fig:estParamsAll}, the full state of the system and the wrenches at the hand frame as a function of time are shown for both simulation (solid) and experimental (dashed) data. To identify $\mx{A}$, $\mx{B}$, and $s$, i.e., the percentage of the shift of COP due to loading, we set up an optimization problem that minimizes the error between the simulated experiment and the measured data. The hand velocity and the normal forces are chosen as the average of the respective values from the experiments. A comparison between the simulated and the experimental results is shown in Figure~\ref{fig:estParamsPath}. The plot on the left shows the simulated object path in blue and the experimental in dashed black. The plot on the right side of Figure~\ref{fig:estParamsPath} shows the positions of the patch and the pivot point in the object frame. It can be seen that the model accurately describes the sliding motion of the object, reaching similar positions and orientations within the same amount of time. In Figure~\ref{fig:traj_examples}, a number of sample sliding motions are shown. The same parameters, except for the COP of the patch, which could vary slightly from one experiment to another, are used for all the simulated results. The prediction of the proposed model matches the experimental results. \subsubsection{Modes and motion cones} Validation of the motion cones and possible modes was carried out by placing the soft finger at different locations on the book and performing linear motions in various directions, with zero angular velocity, while maintaining a constant normal force. The angular velocity of the object was recorded and is visualized in Figure~\ref{fig:regions_exp} for two different locations. The symmetry presented in these results confirms what is posited in Corollary \ref{cor:revTwist}. 
The left side of the figure shows the angular velocities when pressing the soft finger with a normal force of $6\,$N. Since the friction is approximately isotropic, according to Theorem~\ref{thm:linMotionStick} the soft finger sticks to the object when moving towards or away from the COP of the object. When moving perpendicularly to this line, the object pivots and has some rotational velocity. The right side shows the same effect with a normal force of $2\,$N. The patch slips against the object when moving along a direction close to the line that passes through the COPs of the object and of the patch and pivots when the velocity is perpendicular to that. \subsubsection{Controlled sliding} \label{sec:contSliding} One of the main observations of the proposed model is that, by regulating the normal force applied by the soft finger, we can modify the trajectory of the object. As discussed in Section~\ref{sec:normalforce}, an increase in the normal force applied on the object through the friction patch slows down the rotation. Given a patch location and a velocity direction, a reference trajectory for $\theta_o$ can be defined, as long as it stays within the calculated limits as in Figure~\ref{fig:maxRot}. Figure \ref{fig:controlExp} illustrates the result of an experiment for tracking a desired trajectory of the object orientation. A force controller was implemented on the robot to realize the proportional control law~\eqref{eq:controller}. This simple proportional controller was able to closely track the reference trajectory, applying larger normal forces to keep the object from rotating, and relaxing the pressure whenever faster rotation was required. \begin{figure*} \begin{center} \includegraphics[width=1.0\linewidth]{13_traj_examples} \caption{Samples of sliding motion illustrating paths and motion cones. Blue: modelled; Black: experimental. Red: friction patch; Cyan: pivot point.} \label{fig:traj_examples} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{15_control_exp} \caption{Experimental result of tracking a reference trajectory for the object orientation.} \label{fig:controlExp} \end{center} \end{figure} \section{Proof of Theorem~\ref{thm:decomp}} \label{sec:appx2} \begin{proof} Since matrix $\mx{A}$ is symmetric, it has 6 unique elements. Accordingly, by expanding the right hand side of~\eqref{eq:decomp}, we find 6 equations and 6 unknowns, i.e., the diagonal elements of $\mx{\Lambda}$, the angle of rotation $\theta$ for $\mx{R} := \mx{R}(\theta)$ and the vector $\vc{r} = [x_r,\, y_r]^T$ for the Jacobian $\mx{J} := \mx{J}(\vc{r})$. To show that this equation system can indeed be solved, we construct the solution. Using the elements of $\mx{A}$, we calculate \begin{subequations} \begin{align} x &:= (a_{11} - a_{22} ) + \dfrac{1}{a_{33}} (a_{23}^2 - a_{13}^2 ) \nonumber \\ &\phantom{:}= (\lambda_{1} - \lambda_{2}) \cos(2\theta), \\ y &:= -2 (a_{21} - \dfrac{1}{a_{33}} a_{13} a_{23}) \nonumber \\ &\phantom{:}= (\lambda_{1} - \lambda_{2}) \sin(2\theta). \end{align} \end{subequations} Accordingly, we find \begin{align} \theta = \dfrac{1}{2} \Atan2(x,y) \sgn (\lambda_{1} - \lambda_{2}). \end{align} If $\lambda_{1} \neq \lambda_{2}$, and we decide a specific ordering for the elements of $\mx{\Lambda}$, e.g., $\lambda_{1} > \lambda_{2}$, the angle is uniquely determined. If $\lambda_{1} = \lambda_{2}$, any angle can be chosen including $0$. 
After finding $\theta$, as an intermediate step we calculate \begin{align} \tilde{\mx{\Lambda}} = \mx{R}(\theta) \mx{A} \mx{R}^T(\theta). \end{align} Now, the elements of $\vc{r}$ are obtained as \begin{align} x_r = \dfrac{\tilde{\Lambda}_{23}}{\tilde{\Lambda}_{33}}, \quad y_r = - \dfrac{\tilde{\Lambda}_{13}}{\tilde{\Lambda}_{33}}. \end{align} Finally, \begin{align} \mx{\Lambda} = \mx{J}(-\vc{r}) \tilde{\mx{\Lambda}} \mx{J}^T(-\vc{r}). \end{align} \end{proof} \section{Proof of Theorem~\ref{thm:no2alpha}} \label{sec:appx0} \begin{proof} We prove the theorem by contradiction. Assume that there are two distinct positive solutions $\alpha_2 > \alpha_1 > 0$. We show that this implies $\vc{\nu}_h = \vc{0}$. Since $\hat{\mx{A}}$ and $\mx{B}$ are positive definite, so is $\mx{\Lambda}$, and hence all $\lambda_i> 0,\, i\in \{1,2,3\}$. The coefficients $c_i$, which are the diagonal of $\mx{C} = \mx{\Lambda} - \mx{I}$, cannot all have the same sign. Otherwise, the left-hand side of~\eqref{eq:solAlpha} would be either positive or negative irrespective of $\vc{\nu}_h$, and there would be no solution to the equation. This implies that one or two eigenvalues are less than one, while the others are larger than one. Here, we consider the case where $\lambda_1 > 1 > \lambda_2 > \lambda_3$. Other cases are proven similarly. For $\alpha_i, i\in \{1,2\}$, we have \begin{multline} c_1 \left(\dfrac{\tilde{v}_{xh}}{\alpha_i \lambda_1 + 1} \right)^2 + c_2 \left(\dfrac{\tilde{v}_{yh}}{\alpha_i \lambda_2 + 1} \right)^2 + c_3 \left(\dfrac{\tilde{\omega}_{h}}{\alpha_i \lambda_3 + 1} \right)^2 \\ = 0.\nonumber \end{multline} We multiply the equation associated with $\alpha_i$ by \begin{align} \left(\alpha_i \lambda_1 +1\right)^2 \label{eq:elimL1} \end{align} and subtract the resulting equations from each other to eliminate the first term. Accordingly, \begin{multline} c_2 \tilde{v}^2_{yh} \left(f_{1,2}(\alpha_2) - f_{1,2}(\alpha_1) \right) + \\ c_3 \tilde{\omega}^2_{h} \left(f_{1,3}(\alpha_2) - f_{1,3}(\alpha_1) \right) = 0 \label{eq:noc1Term} \end{multline} where \begin{align} f_{i,j}(\alpha) := \left(\dfrac{\alpha \lambda_i + 1}{\alpha \lambda_j + 1}\right)^2. \label{eq:fija} \end{align} For $\alpha \geq 0$ and positive values of $\lambda_i$, from the derivative of~\eqref{eq:fija} we find that the function is monotonically increasing or decreasing, depending on the sign of $\lambda_i - \lambda_j$. Thus, given our assumptions about $\lambda_i$, both $f_{1,2}$ and $f_{1,3}$ are increasing functions. Taking this fact into account, the left-hand side of~\eqref{eq:noc1Term} is always negative unless both $\tilde{v}_{yh}$ and $\tilde{\omega}_{h}$ are zero. If this is the case, from~\eqref{eq:solAlpha} we conclude that $\tilde{v}_{xh}$ must also be zero. Since $\vc{\nu}_h$ is assumed to be nonzero, this completes the proof by contradiction. \end{proof} \section{Proof of Theorem~\ref{thm:unique}} \label{sec:appx} \begin{proof} Firstly, we show that it is impossible to have no active modes, i.e., there is at least one active mode. Secondly, we prove that it is impossible to have any two modes active at the same time unless $\vc{\nu}_h = \vc{0}$. Let us define \begin{align} \label{eq:fofab} f(\beta,\gamma):= \vc{w}(\beta, \gamma)^T \mx{C} \vc{w}(\beta, \gamma) \end{align} where $\vc{w}(\beta, \gamma) = \left(\beta \mx{I} + \gamma \mx{\Lambda}\right)^{-1} \tilde{\vc{\nu}}_h$. 
If neither sticking mode nor slipping mode is possible, from conditions~\eqref{eq:condStick} and~\eqref{eq:condSlip}, we conclude \begin{subequations} \label{eq:noStnoSl} \begin{align} f(\beta,0) &> 0, \\ f(0,\gamma) &< 0. \end{align} \end{subequations} Also define $g(\alpha):= f(1,\alpha)$. According to~\eqref{eq:fofab}, $\sgn f(\beta, \gamma) = \sgn g({\gamma}/{\beta})$. Consequently, conditions~\eqref{eq:noStnoSl} can be written as \begin{align*} g(0) > 0 \end{align*} and for a large enough $\alpha$ \begin{align*} g(\alpha) < 0. \end{align*} Since $g(\cdot)$ is a continuous function, there must exist an $\alpha > 0$ such that $g(\alpha)= 0$, i.e., there is a solution in pivoting mode. Therefore, it is impossible to have no active mode. For ease of reference, here we summarize~\eqref{eq:condStick},~\eqref{eq:alpha4vh}, and~\eqref{eq:condSlip}, which provide the criteria for sticking, pivoting, and slipping modes, respectively: \begin{subequations} \label{eq:stickPivotSlip} \begin{align} \tilde{\vc{\nu}}_h^T \mx{C} \tilde{\vc{\nu}}_h &\leq 0, \label{eq:cond1}\\ \alpha >0,\quad \tilde{\vc{\nu}}_h^T \mx{C} (\mx{I} + \alpha \mx{\Lambda})^{-2} \tilde{\vc{\nu}}_h &= 0, \label{eq:cond2}\\ \tilde{\vc{\nu}}_h^T \mx{C} \mx{\Lambda}^{-2} \tilde{\vc{\nu}}_h &\geq 0. \label{eq:cond3} \end{align} \end{subequations} Now assume that some pair of these conditions holds simultaneously. We can show that this results in a contradiction unless $\vc{\nu}_h = \vc{0}$. The proof construction is similar to the proof of Theorem~\ref{thm:no2alpha} given in Appendix~\ref{sec:appx0}. More specifically, the term corresponding to the largest eigenvalue is eliminated by multiplying the expressions by proper coefficients similar to~\eqref{eq:elimL1}. Here, we provide the details only for the case where~\eqref{eq:cond1} and~\eqref{eq:cond3} are assumed true. We know that the diagonal elements of $\mx{C} = \mx{\Lambda} - \mx{I}$ cannot have the same sign. Let us assume $\lambda_1 > 1 > \lambda_2 > \lambda_3 > 0$. Accordingly, \begin{multline} \label{eq:slip-stick} \lambda_1^2 \tilde{\vc{\nu}}_h^T \mx{C} \mx{\Lambda}^{-2} \tilde{\vc{\nu}}_h - \tilde{\vc{\nu}}_h^T \mx{C} \tilde{\vc{\nu}}_h = \\ c_2 \tilde{v}^2_{yh} \Big(\left(\tfrac{\lambda_1}{\lambda_2}\right)^2 - 1 \Big) + c_3 \tilde{\omega}^2_{h} \Big(\left(\tfrac{\lambda_1}{\lambda_3}\right)^2 - 1 \Big) \leq 0. \end{multline} Unless $\tilde{v}_{yh}$ and $\tilde{\omega}_{h}$ are zero,~\eqref{eq:slip-stick} is strictly negative. However, if $\tilde{v}_{yh} = \tilde{\omega}_{h} = 0$, then to fulfill~\eqref{eq:cond1}, $\vc{\nu}_h$ must vanish altogether since $c_1 > 0$. Thus, we conclude that if $\vc{\nu}_h \neq 0$, then \begin{align} \tilde{\vc{\nu}}_h^T \mx{C} \tilde{\vc{\nu}}_h > \lambda_1^2 \tilde{\vc{\nu}}_h^T \mx{C} \mx{\Lambda}^{-2} \tilde{\vc{\nu}}_h, \end{align} which contradicts the assumption that the left-hand side is less than or equal to zero and the right-hand side is greater than or equal to zero. Other scenarios for $\lambda_i$ are proven similarly. \end{proof} \section{Experimental validation of shift of COP} \label{sec:appx3} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{16_force_plate} \caption{Experimental results of the effect of normal force on the shift of the object COP towards the patch.} \label{fig:copExp} \end{figure} To understand the effect of loading an object with a given normal force, in terms of the amount of displacement of the COP of the object towards the COP of the patch, a number of experiments were carried out. 
We used a BTS Force Plate, which measures forces and centers of pressure. The objects were placed on the surface and pressed with an increasing normal force. The shift $s$ (in percentage) is plotted in Figure~\ref{fig:copExp}, against the normal force (normalized by the object weight) for two different objects: a hardcover book of 463$\,$g, and a flat steel slab of 1593$\,$g. Both objects presented similar behaviors, despite the differences in material properties. The computational model proposed in \eqref{eq:COPmodel} was used to fit the experimental data, and the resulting parameters were $c=0.6$, $\delta=2.0$ for the book and $c = 0.9642$, $\delta = 1.324$ for the metal slab. \titlespacing\section{0pt}{12pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \titlespacing\subsection{0pt}{10pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \titlespacing\subsubsection{0pt}{8pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \newcommand{\abs}[1]{\ensuremath{\left\lvert#1\right\rvert}} \newcommand{\norm}[1]{\left\lVert #1 \right\rVert} \newcommand{\mx}[1]{\mathbf{\bm{#1}}} \newcommand{\vc}[1]{\mathbf{\bm{#1}}} \newcommand{\RN}[1]{\uppercase\expandafter{\romannumeral #1\relax}} \newcommand{\pder}[2]{\frac{\partial#1}{\partial#2}} \newcommand{\der}[2]{\frac{\mathrm{d}#1}{\mathrm{d}#2}} \DeclareMathOperator{\sgn}{sign} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\blkdiag}{blkdiag} \DeclareMathOperator{\Atan2}{Atan2} \DeclareMathOperator{\rad}{rad} \DeclareMathOperator{\trace}{tr} \DeclareMathOperator{\sat}{sat} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}{Corollary} \newtheorem{conj}{Conjecture} \newtheorem{defn}{Definition} \newtheorem{exmp}{Example} \newtheorem{rem}{Remark} \title{Quasi-static Analysis of Planar Sliding Using Friction Patches} \usepackage{xwatermark} \newwatermark[firstpage,color=gray!90,angle=0,scale=0.28, xpos=0in,ypos=-5in]{*correspondence: \texttt{mahdi.ghazaei@control.lth.se}} \usepackage{authblk} \renewcommand*{\Authfont}{\bfseries} \author[1\thanks{\tt{mahdi.ghazaei@control.lth.se}}]{M. Mahdi Ghazaei Ardakani} \author[2]{Joao Bimbo} \author[3]{Domenico Prattichizzo} \affil[1,2,3]{Istituto Italiano di Tecnologia (IIT), Italy} \affil[3]{Department of Information Engineering, University of Siena, Italy} \begin{document} \twocolumn[ \begin{@twocolumnfalse} \maketitle \begin{abstract} Planar sliding of objects is modeled and analyzed. The model can be used for non-prehensile manipulation of objects lying on a surface. We study possible motions generated by frictional contacts, such as those arising between a soft finger and a flat object on a table. Specifically, using a quasi-static analysis we are able to derive a hybrid dynamical system to predict the motion of the object. The model can be used to find fixed points of the system and the path taken by the object to reach such configurations. Important information for planning, such as the conditions in which the object sticks to the friction patch, pivots, or completely slides against it, is obtained. Experimental results confirm the validity of the model for a wide range of applications. 
\end{abstract} \keywords{Planar sliding, non-prehensile manipulation, soft finger, frictional contact} \vspace{0.35cm} \end{@twocolumnfalse} ] \section{Introduction}\label{sec:intro} \input{01_intro} \section{Modelling}\label{sec:slidingModel} \input{02_sliding} \section{Dynamical system}\label{sec:dynSys} \input{03_dynSys} \section{Properties of the solution} \input{04_solProp} \section{Approximate solution}\label{sec:approxSol} \input{05_approxSol} \section{Strategies for sliding} \input{06_movePrim} \section{Experiments and results} \input{07_results} \section{Discussion} \input{08_disc} \section{Conclusion} \input{09_conclusion} \footnotesize \section*{Acknowledgements} The research has received funding from the SOMA project (European Commission, Horizon 2020 Framework Programme, H2020-ICT-645599). \normalsize
\section{Introduction} In complete active space second-order perturbation (CASPT2) theory,\cite{Andersson1990JPC,Andersson1992JCP,Pulay2011IJQC,Roosbook2016} one has to use some form of denominator shifts in order to avoid the so-called intruder states. To see the role of the shifts, let us briefly recapitulate the CASPT2 theory. The CASPT2 theory is formulated as a minimization problem of the Hylleraas functional, \begin{align} E^{(2)} = \langle \Phi^{(0)} | \hat{T}^\dagger (\hat{H}^{(0)} - E^{(0)} ) \hat{T} |\Phi^{(0)} \rangle + 2 \langle \Phi^{(0)} | \hat{T}^\dagger \hat{H} |\Phi^{(0)} \rangle, \label{hyll} \end{align} where the excitation operator is defined as \begin{align} \hat{T} = \sum_\Omega T_\Omega \hat{E}_\Omega. \end{align} $\hat{E}_\Omega$ is the standard spin-free excitation operator with $\Omega$ being the excitation manifold. The stationary point with respect to the amplitude $T_\Omega$ can be found by solving \begin{align} \sum_{\Omega'} \langle \Omega | (\hat{H}^{(0)} - E^{(0)} ) |\Omega' \rangle T_{\Omega'} + \langle \Omega | \hat{H} |\Phi^{(0)} \rangle = 0, \end{align} where we introduced $|\Omega\rangle = \hat{E}_\Omega |\Phi^{(0)} \rangle $ for brevity. Let $\omega$ be an orthogonal basis in the expansion space $\Omega$ that diagonalizes the first term, namely $\langle \omega | (\hat{H}^{(0)} - E^{(0)} ) |\omega' \rangle = \delta_{\omega\omega'}\Delta_{\omega} $ (note, however, that this basis is not formed in CASPT2 in practice); then, this equation can be formally solved as \begin{align} T_{\omega} = -\frac{\langle \omega | \hat{H} |\Phi^{(0)} \rangle}{\Delta_{\omega}}, \label{gentamp} \end{align} which can be used to calculate the second-order energy $E^{(2)}$. The intruder state problem stems from the fact that $\Delta_{\omega}$ sometimes vanishes, which leads to divergence of the second-order energies. To regularize this divergence, several schemes have been developed. The real shift\cite{Roos1995CPL} modifies the denominator by $1/\Delta \to 1/(\Delta + \epsilon)$ and adds the so-called shift correction using first-order perturbation theory. This results in the expression: \begin{align} \frac{1}{\Delta}\to \frac{\Delta}{(\Delta + \epsilon)^2} \approx \left\{ \begin{array}{ll} \displaystyle \frac{\Delta}{\epsilon^2} - \frac{2\Delta^2}{\epsilon^3} & \Delta \ll \epsilon \\[8pt] \displaystyle \frac{1}{\Delta}-\frac{2\epsilon}{\Delta^2} & \Delta \gg \epsilon \end{array} \right. \label{realaa} \end{align} As one can see, when $\Delta$ is small, the contribution is suppressed by $\epsilon$ in the denominator; when $\Delta$ is large, it approaches $1/\Delta$. Our previous CASPT2 nuclear gradient works\cite{MacLeod2015JCP,Vlaisavljevich2016JCTC,Park2017JCTC,Park2017JCTC2} have been based on this real shift scheme. The imaginary shift\cite{Forsberg1997CPL} replaces the denominator $1/\Delta$ with $\Re [1/(\Delta + i\epsilon)]$. As pointed out in the original work, this regularization is more attractive than the real shift, especially for excited-state calculations, because the imaginary shift approach is guaranteed to be singularity free (owing to the fact that the pole is moved away from the real axis). 
When the shift correction is included, this is equivalent to the following regularization, \begin{align} \frac{1}{\Delta}\to \frac{\Delta(\Delta^2 + 2\epsilon^2)}{(\Delta^2 + \epsilon^2)^2} \approx \left\{ \begin{array}{ll} \displaystyle \frac{2\Delta}{\epsilon^2} - \frac{3\Delta^3}{\epsilon^4} & \Delta \ll \epsilon \\[8pt] \displaystyle \frac{1}{\Delta}-\frac{\epsilon^4}{\Delta^5} & \Delta \gg \epsilon \end{array} \right. \label{imagaa} \end{align} It should be noted that, away from the singularity, the imaginary shift scheme has an error of the order of $(\epsilon/\Delta)^4$, whereas the real shift scheme has an error of the order of $(\epsilon/\Delta)$. This also supports the superiority of the imaginary shift over the real shift. The regularization by the imaginary shift can also be justified by the comparison to other regularization schemes that are energy dependent.\cite{Evangelista2014JCP,Lee2018JCTC} For instance, the driven similarity renormalization group (DSRG) of Evangelista and co-workers,\cite{Evangelista2014JCP,Li2015JCTC} though derived from a completely different perspective, can be considered to be a form of regularization for CASPT2 (at least) at second order. In DSRG, the divergence is regularized by \begin{align} \frac{1}{\Delta}\to \frac{1-e^{-\Delta^2/\epsilon^2}}{\Delta} \approx \left\{ \begin{array}{ll} \displaystyle \frac{\Delta}{\epsilon^2} - \frac{\Delta^3}{2\epsilon^4} & \Delta \ll \epsilon \\[8pt] \displaystyle \frac{1}{\Delta} -\frac{e^{-\Delta^2/\epsilon^2}}{\Delta} & \Delta \gg \epsilon \end{array} \right. \end{align} The small-$\Delta$ limit is very similar to the imaginary shift (up to an overall factor of two), while the damping is exponential in the large-$\Delta$ limit. Though there is a small difference, the similarity between the imaginary shift and DSRG provides further evidence for the effectiveness of the imaginary shift scheme. Motivated by these observations, we extend in this work the CASPT2 nuclear gradient theory to include the imaginary shift. The theory, algorithm, working equations, and numerical results are presented in the following. All of the computer programs are implemented in the {\sc bagel} program package, which is publicly available under the GNU General Public License.\cite{bagel,Shiozaki2018WIREs} \section{Theoretical Background} In this section, we briefly review the CASPT2 theory with the imaginary shift, first introduced in Ref.~\onlinecite{Forsberg1997CPL}. Hereafter $i$, $j$, $k$, and $l$ label closed orbitals, $r$, $s$, $t$, and $u$ label active orbitals, $a$, $b$, $c$, and $d$ denote virtual orbitals, and $x$, $y$, $z$, and $w$ label general orbitals. $\Omega$ and $\tilde{\Omega}$ are redundant (non-orthogonal) and orthogonal two-electron excitation manifolds. \subsection{CASPT2-D Energy Evaluation with the Imaginary Shift} Since the working equations for CASPT2 with imaginary shift are somewhat complicated, we start with a simpler form of CASPT2 that uses the zeroth-order Hamiltonian without off-diagonal couplings, $\hat{H}^{(0)}_D$ (called CASPT2-D).\cite{Andersson1990JPC} The CASPT2-D perturbative amplitude in the orthogonal basis $\tilde{\Omega}$ is formally \begin{align} \mathcal{T}_{\tilde{\Omega}} & = - \frac{\langle \tilde{\Omega} | \hat{H} |\Phi^{(0)} \rangle }{\Delta_{\tilde{\Omega}} + i\epsilon}. \end{align} Note that we choose $\tilde{\Omega}$ such that they diagonalize the zeroth-order Hamiltonian in each excitation subspace, with $\Delta_{\tilde{\Omega}}$ being the associated eigenvalues [see discussions above Eq.~\eqref{gentamp}]. 
The real part of the amplitudes is \begin{align} T_{\tilde{\Omega}} = \Re \left ( \mathcal{T}_{\tilde{\Omega}} \right) & = - \frac{\langle \tilde{\Omega} | \hat{H} |\Phi^{(0)} \rangle \Delta_{\tilde{\Omega}}}{\Delta_{\tilde{\Omega}}^2 + \epsilon^2} \label{tamp0} \end{align} which is to be substituted into the Hylleraas functional [Eq.~\eqref{hyll}] to arrive at the CASPT2-D energy expression with the shift corrections, \begin{align} E^{(2)} = - \sum_{\tilde{\Omega}} \frac{\Delta_{\tilde{\Omega}} (\Delta_{\tilde{\Omega}}^2 + 2 \epsilon^2)}{(\Delta_{\tilde{\Omega}}^2 + \epsilon^2)^2} | \langle {\tilde{\Omega}} | \hat{H} |\Phi^{(0)} \rangle |^2. \label{imagenergy} \end{align} It is known\cite{Forsberg1997CPL} that the same perturbative amplitudes can be obtained by variationally minimizing the following functional, \begin{align} \langle \Phi^{(1)}| \hat{H}^{(0)} - E^{(0)} + \frac{\epsilon^2}{\Delta_{\tilde{\Omega}}}|\Phi^{(1)}\rangle + 2\langle \Phi^{(1)} | \hat{H} |\Phi^{(0)} \rangle, \label{zeromod} \end{align} with $|\Phi^{(1)}\rangle = \hat{T} | \Phi^{(0)}\rangle$. Taking the derivative with respect to $T_{\tilde{\Omega}}$ and setting it equal to zero, one obtains the amplitude equation, \begin{align} \langle \tilde{\Omega} | \hat{H}^{(0)} - E^{(0)} + \frac{\epsilon^2}{\Delta_{\tilde{\Omega}}} | \tilde{\Omega}\rangle T_{\tilde{\Omega}} + \langle\tilde{\Omega} | \hat{H} |\Phi^{(0)} \rangle = 0, \end{align} from which one recovers Eq.~\eqref{tamp0}. We can further rewrite this equation as follows using $\tilde{T}_{\tilde{\Omega}} = T_{\tilde{\Omega}}/\Delta_{\tilde{\Omega}}$, \begin{align} \langle \tilde{\Omega} | \Delta_{\tilde{\Omega}}(\hat{H}^{(0)} - E^{(0)}) + \epsilon^2 | \tilde{\Omega}\rangle \tilde{T}_{\tilde{\Omega}} + \langle\tilde{\Omega} | \hat{H} |\Phi^{(0)} \rangle = 0, \end{align} which is explicitly non-singular. This formulation is amenable to nuclear gradient formulations and will be used later. To clarify the procedure for obtaining the orthogonal configurations and denominators in the above, let us take an excitation class $\hat{E}_{ar,bs}$ as an illustrative example ($\{ar,bs\} \in \Omega$). Applying this operator to the reference configuration generates excited configurations that are not orthogonal to each other. The overlap and zeroth-order Hamiltonian matrix elements between these configurations are \begin{subequations} \begin{align} & \langle \Omega_{ar,bs} | \Omega_{ct,du} \rangle = \delta_{ac} \delta_{bd} \mathcal{S}_{rs,tu}, \\ & \langle \Omega_{ar,bs} | \hat{H}^{(0)}- E^{(0)} | \Omega_{ct,du} \rangle \nonumber\\ &\quad = \delta_{ac} \delta_{bd} \left[\mathcal{F}_{rs,tu} + (f_{aa} + f_{bb} - E^{(0)})\mathcal{S}_{rs,tu}\right] , \\ & \mathcal{S}_{rs,tu} = \Gamma^{(2)}_{rt,su},\\ & \mathcal{F}_{rs,tu} = \sum_{vw} \Gamma^{(3)}_{rt,su,vw} f_{vw}, \end{align} \end{subequations} where $\Gamma^{(n)}$ is an $n$-particle reduced density matrix (RDM) of the reference wave functions. One can find $V^T_{rs}$ that simultaneously satisfies the following conditions, \begin{subequations} \label{orthogonalcond} \begin{align} &\sum_{rstu} V^{T}_{rs} \mathcal{S}_{rs,tu} V^{U}_{tu} = \delta_{TU}, \\ &\sum_{rstu} V^{T}_{rs} \mathcal{F}_{rs,tu} V^{U}_{tu} = \delta_{TU} \phi_{T}, \end{align} \end{subequations} where $\phi_T$ is the $T$-th eigenvalue of the Fock matrix in this basis. 
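In practice, the conditions in Eq.~\eqref{orthogonalcond} constitute a symmetric generalized eigenvalue problem, $\sum_{tu}\mathcal{F}_{rs,tu} V^{T}_{tu} = \phi_T \sum_{tu}\mathcal{S}_{rs,tu} V^{T}_{tu}$, with $\mathcal{S}$-orthonormal eigenvectors. The minimal Python sketch below illustrates this construction with random stand-ins for $\mathcal{S}$ and $\mathcal{F}$ (not actual CASPT2 quantities); a production implementation would additionally project out near-singular directions of the overlap.
\begin{verbatim}
# Sketch of Eq. (orthogonalcond): find V with V^T S V = I and
# V^T F V = diag(phi) via a symmetric generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(7)
n = 6
X = rng.standard_normal((n, n))
S = X @ X.T + 1e-3 * np.eye(n)     # overlap-like, positive definite
Y = rng.standard_normal((n, n))
F = 0.5 * (Y + Y.T)                # Fock-like, symmetric

phi, V = eigh(F, S)                # F V = S V diag(phi)

# V satisfies both conditions of Eq. (orthogonalcond):
assert np.allclose(V.T @ S @ V, np.eye(n), atol=1e-8)
assert np.allclose(V.T @ F @ V, np.diag(phi), atol=1e-8)
print("phi_T =", phi)
\end{verbatim}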
The denominator in the orthogonal basis for $\tilde{\Omega}_{ab, T}$ ($\{ab,T\}\in \tilde{\Omega}$) then reads \begin{align} \Delta_{ab,T} = f_{aa} + f_{bb} + \phi_T - E^{(0)}. \label{DabT} \end{align} The denominators for other excitation subspaces are similarly defined. The substitution of these expressions into Eq.~\eqref{imagenergy} yields the CASPT2-D energy with the imaginary shift. \subsection{CASPT2 Energy Evaluation with Imaginary Shift\label{energysec}} When the off-diagonal elements are included in the zeroth-order Hamiltonian, as in the standard CASPT2 method, the amplitude equation has to be solved iteratively. When the imaginary shift is included, the Hylleraas functional is to be modified according to Eq.~\eqref{zeromod}, and the amplitude equation for $\tilde{T}_{\tilde{\Omega}}$ becomes \begin{align} 0 &= \langle\tilde{\Omega}| \Delta_{\tilde{\Omega}} (\hat{H}^{(0)} - E^{(0)}) + \epsilon^2|\tilde{\Omega}\rangle \tilde{T}_{\tilde{\Omega}} \nonumber \\ & + \sum_{\tilde{\Omega}^\prime\neq \tilde{\Omega}} \langle \tilde{\Omega} | \hat{H}^{(0)} |\tilde{\Omega}^\prime \rangle \Delta_{\tilde{\Omega}'}\tilde{T}_{\tilde{\Omega}^\prime} + \langle \tilde{\Omega} | \hat{H} |\Phi^{(0)} \rangle. \label{bbb} \end{align} Note that this can only be defined in the orthogonal basis $\tilde{\Omega}$, because the shift expression depends on the denominator $\Delta_{\tilde{\Omega}}$. In our implementation, we first evaluate the residual vector, $\sigma$, without the shift term, in the space of the redundant basis $\Omega$, \begin{align} \sigma_{\Omega} = \sum_{\Omega'} \langle \Omega | \hat{H}^{(0)} - E^{(0)} | \Omega' \rangle T_{\Omega' }+ \langle \Omega | \hat{H} | \Phi^{(0)} \rangle. \label{aaa} \end{align} The perturbative amplitudes in the redundant [appearing in Eq.~\eqref{aaa}] and orthogonal [appearing in Eq.~\eqref{bbb}] subspaces are related to each other as \begin{align} T_{\Omega} = \sum_{\tilde{\Omega}} \tilde{T}_{\tilde{\Omega}} \Delta_{\tilde{\Omega}} V_{\tilde{\Omega}}^{\Omega}. \end{align} $V$ is defined as in Eq.~\eqref{orthogonalcond}. Note that $V$ is block diagonal with respect to the excitation classes and is independent of the virtual indices. The residual vector in the redundant basis is then projected to the orthogonal subspace; after adding the imaginary shift contribution, it reads \begin{align} \sigma_{\tilde{\Omega}}^\prime = \sum_{\Omega} \sigma_{\Omega} V_{\tilde{\Omega}}^{\Omega} + \epsilon^2 \tilde{T}_{\tilde{\Omega}},\label{ImagContribution} \end{align} using which we update the amplitudes in the orthogonal basis as \begin{align} \Delta \tilde{T}_{\tilde{\Omega}}& = - \frac{ \sigma_{\tilde{\Omega}}^\prime}{\Delta_{\tilde{\Omega}}^2 + \epsilon^2}. \end{align} This procedure is repeated until convergence is achieved. At convergence, we compute the second-order energy by inserting $T_\Omega$ into the Hylleraas functional, Eq.~\eqref{hyll}. This procedure computes energies that implicitly include the shift corrections. \subsection{Multistate Extensions of the Imaginary Shift} In the extended multistate CASPT2 theory, XMS-CASPT2,\cite{Finley1998CPL,Granovsky2011JCP,Shiozaki2011JCP3,Vlaisavljevich2016JCTC} with the so-called SS-SR contraction scheme, a correlated basis state ($| \Phi_L^{(1)} \rangle$) for reference state $L$ is generated by a procedure similar to that for the state-specific CASPT2 theory above, \begin{align} | \Phi_L^{(1)} \rangle = \hat{T}_{L} | \tilde{L} \rangle = | \tilde{\Omega}_L \rangle T_{L,\tilde{\Omega}}.
\end{align} In the case of the so-called MS-MR contraction, this equation is to be modified; however, in the following, we omit the working equations for the MS-MR contraction for brevity, though they have likewise been derived and implemented in efficient code. The first-order wave function for physical state $P$ is then formed as a linear combination of correlated basis states, \begin{align} | \Psi_P^{(1)} \rangle = \sum_L |\Phi^{(1)}_L\rangle R_{LP}. \end{align} Note that, in XMS-CASPT2, we use the so-called XMS reference states $| \tilde{L} \rangle$ that diagonalize the Fock operator in the model space, as first proposed for an uncontracted variant, XMCQDPT.\cite{Granovsky2011JCP} The unitary matrix elements, $R_{LP}$, are determined by the diagonalization of an effective Hamiltonian $H_\mathrm{eff}$ whose elements are \begin{align} H_{\mathrm{eff},LL^\prime} & = \langle \tilde{L} | \hat{H} | \tilde{L}^\prime \rangle + \langle \tilde{L} | \hat{T}_{L}^\dagger \hat{H} | \tilde{L}^\prime \rangle + \delta_{LL^\prime }\langle \tilde{L} | \hat{T}^\dagger_{L} \hat{H} |\tilde{L} \rangle \nonumber \\ & + \delta_{LL^\prime } \langle \tilde{L} | \hat{T}^\dagger_{L} ( \hat{H}^{(0)} - E_L^{(0)} ) \hat{T}_{L} | \tilde{L} \rangle. \end{align} The shift is included through the perturbative amplitudes. The XMS-CASPT2 energy expression for the $P$-th state with the imaginary shift is \begin{align} E_{P} & = \sum_{MN} \langle \tilde{M} | \hat{H} | \tilde{N} \rangle R_{MP} R_{NP} + \sum_{LN} R_{LP} R_{NP} \langle \tilde{L} | \hat{T}_{L}^\dagger \hat{H} | \tilde{N} \rangle\nonumber\\ &+ \sum_{L} R_{LP}^2 \left(\langle \tilde{L} | \hat{T}^\dagger_{L} \hat{H} |\tilde{L} \rangle + \langle \tilde{L} | \hat{T}^\dagger_{L} ( \hat{H}^{(0)} - E_L^{(0)} ) \hat{T}_{L} | \tilde{L} \rangle\right). \end{align} Unlike in XMS-CASPT2 with the real shift, the last term, which accounts for the shift correction, has to be included explicitly. \section{Nuclear Gradient Theory for CASPT2 with Imaginary Shift} \subsection{CASPT2 Lagrangian with Imaginary Shift} The CASPT2 energy with the imaginary shift is not stationary with respect to the amplitudes. The CASPT2 part of the Lagrangian is defined as \begin{align} \mathcal{L}_{\mathrm{PT2},P} & = E_{P} + \sum_{L,\tilde{\Omega}} \lambda_{L,\tilde{\Omega}} \sigma_{L,\tilde{\Omega}} + \sum_{L,\tilde{\Omega}} \Lambda_{L,\tilde{\Omega}} \left[ \Delta_{L,\tilde{\Omega}} - f_{L,\tilde{\Omega}} \right], \label{lagpt2} \end{align} where $f_{L,\tilde{\Omega}}$ is the explicit form of $\Delta_{L,\tilde{\Omega}}$, e.g., the right-hand side of Eq.~\eqref{DabT}. Note that $\Delta_{L,\tilde{\Omega}}$ is treated as a parameter that is constrained to a particular value, as seen in the last term of Eq.~\eqref{lagpt2}. This makes the Lagrangian linear in the Hamiltonian, which in turn allows for a straightforward definition of the relaxed density matrices.
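As a brief aside, once the amplitude contractions above are assembled into a matrix, the multistate step reduces to a small diagonalization. A minimal sketch (ours; it assumes the customary symmetrization of $H_\mathrm{eff}$, and the numerical entries are placeholders, not chemistry):
\begin{verbatim}
import numpy as np

def xms_rotation(heff):
    """Diagonalize a (symmetrized) effective Hamiltonian.

    heff: (nstates, nstates) matrix of the elements H_eff,LL'
    assembled from the reference Hamiltonian and the amplitude
    contractions given above.  Returns energies and rotation R.
    """
    hsym = 0.5 * (heff + heff.T)   # customary symmetrization
    energies, R = np.linalg.eigh(hsym)
    return energies, R

heff = np.array([[-1.00,  0.02,  0.00],
                 [ 0.03, -0.90,  0.01],
                 [ 0.00,  0.02, -0.70]])  # placeholder values
E, R = xms_rotation(heff)
\end{verbatim}
When the diagonal shift-correction terms are already included in $H_\mathrm{eff}$, the eigenvalues correspond to the state energies $E_P$ quoted above.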
The stationary condition with respect to the perturbation amplitude, the so-called $\lambda$-equation, is obtained by differentiating the Lagrangian of Eq.~\eqref{lagpt2} with respect to the amplitude $T_{L}$, as \begin{align} 0 & = \langle \tilde{\Omega}_L | \Delta_{L,\tilde{\Omega}}(\hat{H}^{(0)} - E_L^{(0)}) + \epsilon^2 | \tilde{\Omega}_L \rangle \tilde{\lambda}_{L,\tilde{\Omega}} \nonumber \\ & + \sum_{\tilde{\Omega}_L^\prime\neq \tilde{\Omega}_L} \langle \tilde{\Omega}_L | \hat{H}^{(0)} |\tilde{\Omega}_L^\prime \rangle \Delta_{L,\tilde{\Omega}'}\tilde{\lambda}_{L,\tilde{\Omega}^\prime} + \sum_{N} R_{NP} \langle \tilde{\Omega}_L | \hat{H} | \tilde{N} \rangle \nonumber \\ & + R_{LP} \left[\langle \tilde{\Omega}_L | \hat{H} |\tilde{L} \rangle + 2 \langle \tilde{\Omega}_L | ( \hat{H}^{(0)} - E_L^{(0)} ) \hat{T}_{L} | \tilde{L} \rangle \right] \label{Lambda_now} \end{align} whose last term is not present in the $\lambda$-equation for CASPT2 with the real shift in the previous reports.\cite{Shiozaki2011JCP3,Vlaisavljevich2016JCTC} This new term arises because, when including the imaginary shift, we have to use the explicit Hylleraas functional for the diagonal elements of the effective Hamiltonian. The difference is compensated later such that the nuclear gradients remain identical in the limit of $\epsilon=0$. In addition, by taking a derivative of $\mathcal{L}_{\mathrm{PT2},P}$ with respect to $\Delta_{L,\tilde{\Omega}}$ and setting it to zero, one obtains \begin{align} \Lambda_{L,\tilde{\Omega}} = \epsilon^2 \tilde{\lambda}_{L,\tilde{\Omega}} \tilde{T}_{L,\tilde{\Omega}}. \label{LambdaOmega} \end{align} With these procedures, $\mathcal{L}_{\mathrm{PT2},P}$ is now stationary with respect to all of the parameters, namely $T$, $\lambda$, $\Delta_{L,\tilde{\Omega}}$, and $\Lambda_{L,\tilde{\Omega}}$. Next, we consider the conditions associated with constructing the orthogonal basis functions. The Lagrangian is augmented to account for the fact that the Fock operator is diagonal in the orthogonal basis $\tilde{\Omega}$ [for instance, Eq.~\eqref{orthogonalcond}] as \begin{align} \mathcal{L}_{\mathrm{imag},P} & = \mathcal{L}_{\mathrm{PT2},P} + \tr \left[ \mathbf{\bar{z}} \left( \mathbf{V}^\dagger \boldsymbol{\mathcal{F}}\mathbf{V} - \boldsymbol{\phi} \right) \right] \nonumber\\ &- \tr \left[ \mathbf{\bar{X}} \left( \mathbf{V}^\dagger \boldsymbol{\mathcal{S}}\mathbf{V} - \mathbf{1} \right) \right],\label{lagimag} \end{align} where $\boldsymbol{\phi}$ is a diagonal matrix whose elements are $\phi_T$. The diagonal elements of $\bar{\mathbf{z}}$ are obtained from the stationary condition for $\mathcal{L}_{\mathrm{imag},P}$ with respect to variation of $\phi_T$. For example, $\bar{z}_{TT}$ for the configurations $\tilde{\Omega} \in \{ab,T\}$ is \begin{align} \bar{z}_{TT} = - \sum_L \sum_{ab} \Lambda_{L,ab,T}. \end{align} The remaining elements of $\bar{\mathbf{X}}$ and $\bar{\mathbf{z}}$ are determined by differentiating the Lagrangian with respect to the non-redundant set of parameters for $\mathbf{V}$. We do so by introducing a unitary rotation $\mathbf{W}$, i.e., \begin{align} \mathbf{V} = \mathbf{V}^{0} \mathbf{W}. \end{align} Then, the multipliers are \begin{align} \bar{z}_{TU} & = -\frac{1}{2} \frac{\bar{Y}_{TU} - \bar{Y}_{UT}}{\phi_T - \phi_U}, \nonumber \\ \bar{X}_{TU} & = \frac{1+\tau_{TU}}{4} \left(\bar{Y}_{TU} + 2 \bar{z}_{TU} \phi_T \right), \end{align} where $\tau_{TU}$ permutes the indices $T$ and $U$.
$\bar{Y}_{TU}$ is the derivative of $\mathcal{L}_{\mathrm{imag},P}$ with respect to $\mathbf{W}$, whose explicit expression for the configurations $\tilde{\Omega} \in \{ab,T\}$ is \begin{align} \bar{Y}_{TU} & = \epsilon^2 \sum_L \sum_{ab} \tilde{\lambda}_{L,ab,T} \tilde{T}_{L,ab,U} \left(\Delta_{L,ab,T} - \Delta_{L,ab,U}\right). \end{align} With these Lagrange multipliers $\lambda$, $\bar{z}$, and $\bar{X}$, $\mathcal{L}_{\mathrm{imag},P}$ is stationary with respect to any variation of $T$, $\Delta$, and $V$. The terms associated with the use of the orthogonal basis vanish, as expected, when the shift parameter $\epsilon$ is zero or when the real shift is used, since $\bar{Y}$ becomes zero in these cases (see the Supporting Information). \subsection{$Z$-vector Equation} The total Lagrangian, which is to be made stationary with respect to the CI and molecular orbital (MO) coefficients in the CASSCF procedure, reads\cite{Celani2003JCP,Shiozaki2011JCP3} \begin{align} \mathcal{L} & = \mathcal{L}_{\mathrm{imag},P} \nonumber \\ & + \frac{1}{2} \tr \left[ \mathbf{Z} \left( \mathbf{A}-\mathbf{A}^\dagger \right) \right] - \frac{1}{2} \tr \left[ \mathbf{X} \left( \mathbf{C}^\dagger \mathbf{SC} - \mathbf{1} \right) \right] \nonumber \\ & + \sum_N W_N \left[ \sum_{I} z_{I,N} \langle I | \hat{H} - E_N^{\mathrm{ref}} | N \rangle - \frac{1}{2} x_N \left( \langle N | N \rangle - 1 \right) \right] \nonumber \\ & + \sum_i^{\mathrm{closed}}\sum_{j \neq i}^{\mathrm{closed}} z_{ij}^{c} f_{ij} + \sum_a^{\mathrm{virtual}}\sum_{b \neq a}^{\mathrm{virtual}} z_{ab}^{c} f_{ab} +\sum_{MN} w_{MN} \langle \tilde{M} | \hat{f} | \tilde{N} \rangle. \end{align} Here, $\mathbf{A}$ is an orbital gradient matrix in CASSCF, $\mathbf{S}$ is an overlap integral in the atomic orbital basis, and $\mathbf{C}$ is the matrix of MO coefficients. $N$ labels CASSCF states, whose wave functions are $|N\rangle$. $W_N$ is the weight used in the state averaging scheme, and $I$ labels Slater determinants in the active space. The terms in the second and third lines define the conditions for the CASSCF wave functions. The remaining terms account for the fact that the Fock matrix in the MO basis is diagonal in the closed and virtual orbital spaces (including the frozen core approximation) and for the condition associated with the XMS rotations.\cite{Shiozaki2011JCP3} The multipliers $\mathbf{Z}$, $\mathbf{z}$, and $\mathbf{X}$ can be obtained by solving the so-called $Z$-vector equation.\cite{Celani2003JCP,Shiozaki2011JCP3,Vlaisavljevich2016JCTC} The source terms for the $Z$-vector equation are the derivatives of $\mathcal{L}_{\mathrm{imag},P}$ with respect to the orbital rotation parameters $\kappa_{xy}$ and the CI coefficients $c_{I,N}$; they are \begin{align} Y_{xy} & = \frac{\partial \mathcal{L}_{\mathrm{imag},P}}{\partial \kappa_{xy}}, \\ y_{I,N} & = \frac{\partial \mathcal{L}_{\mathrm{imag},P}}{\partial c_{I,N}}. \end{align} To evaluate these terms, it is convenient to rewrite $\mathcal{L}_{\mathrm{imag},P}$ in a form that separates the terms dependent on the molecular integrals and RDMs, those that are only dependent on the RDMs, and those that are independent of the molecular integrals or RDMs.
The rewritten expression is \begin{align} \mathcal{L}_{\mathrm{imag},P} & = \tr \left( \mathbf{hd} \right) + \tr \left[ \mathbf{g} \left(\mathbf{d}^{(0),\mathrm{SA}}\right) \mathbf{d}^{(2)} \right] + \sum_{kl} \tr \left( \mathbf{K}^{kl} \mathbf{D}^{lk} \right) \nonumber \\ & + \sum_{L} \sum_{n=1}^{3} \tr \left( \mathbf{e}^{(n)S,LL} \mathbf{\Gamma}^{(n),LL} \right) + 2 \sum_{\tilde{\Omega}} \Lambda_{\tilde{\Omega}} \Delta_{\tilde{\Omega}}- \tr \left( \mathbf{\bar{z}} \boldsymbol{\phi} - \mathbf{\bar{X}} \right).\label{lagrangian} \end{align} Here we use the following notations for the molecular integrals, \begin{subequations} \begin{align} \left[\mathbf{g}(\mathbf{d})\right]_{xy} & = \sum_{zw} \left[ (xy|zw) d_{zw} - \frac{1}{4}(xw|zy) (d_{zw} + d_{wz}) \right],\\ \mathbf{K}^{zw}_{xy} & = (xz|yw), \end{align} \end{subequations} and for the RDMs, \begin{subequations} \begin{align} &\Gamma^{(1),LL}_{rs} = \langle \tilde{L} | \hat{E}_{rs} | \tilde{L} \rangle, \\ &\Gamma^{(2),LL}_{rs,tu} = \langle \tilde{L} | \hat{E}_{rs,tu} | \tilde{L} \rangle, \\ &\Gamma^{(3),LL}_{rs,tu,vw} = \langle \tilde{L} | \hat{E}_{rs,tu,vw} | \tilde{L} \rangle, \\ &d^{(0),\mathrm{SA}}_{rs} = \sum_{L} W_L \Gamma^{(1),LL}_{rs}. \end{align} \end{subequations} The density-like terms $\mathbf{e}^{(1)S}$ to $\mathbf{e}^{(3)S}$ arise from the overlap of the redundant basis, $\boldsymbol{\mathcal{S}}$, as \begin{align} \mathbf{e}^{(n)S,MM} = - \sum_{TU} \bar{X}_{TU} \frac{\partial }{\partial \mathbf{\Gamma}^{(n),MM}} \left( \mathbf{V}^\dagger \boldsymbol{\mathcal{S}} \mathbf{V} \right)_{TU}. \end{align} For example, the contribution from $\tilde{\Omega}\in\left\{ ab,T \right\}$ is \begin{align} e^{(2)S,MM}_{rs,tu} = - \sum_{TU} \bar{X}_{TU} V^{T}_{rt,M} V^{U}_{su,M}.\label{ens} \end{align} The terms that do not depend on the molecular integrals or RDMs contribute to neither $Y_{xy}$ nor $y_{I,N}$. The total one-electron and two-electron density matrices, $\mathbf{d}$ and $\mathbf{D}$, are \begin{subequations} \begin{align} \mathbf{d} & = \mathbf{d}^{(0)} + \mathbf{d}^{(1)} + \mathbf{d}^{(2)}, \\ \mathbf{D} & = \mathbf{D}^{(0)} + \mathbf{D}^{(1)}, \end{align} \end{subequations} where the superscripts denote the perturbation order. The zeroth- and first-order contributions are \begin{subequations} \begin{align} d^{(0)}_{xy} & = \sum_{LN} R_{LP} R_{NP} \langle \tilde{L} | \hat{E}_{xy} | \tilde{N} \rangle , \\ D^{(0)}_{xyzw} & = \sum_{LN} R_{LP} R_{NP} \langle \tilde{L} | \hat{E}_{xyzw} | \tilde{N} \rangle , \\ d^{(1)}_{xy} & = \sum_{LN} R_{LP} R_{NP} \langle \tilde{L} | \hat{T}_{L}^\dagger \hat{E}_{xy} | \tilde{N} \rangle \nonumber \\ & + \sum_L R_{LP}^2 \langle \tilde{L} | \hat{T}^\dagger_{L} \hat{E}_{xy} |\tilde{L} \rangle + \langle \tilde{L} | \hat{\lambda}_{L}^\dagger \hat{E}_{xy} | \tilde{L} \rangle, \\ D^{(1)}_{xyzw} & = \sum_{LN} R_{LP} R_{NP} \langle \tilde{L} | \hat{T}_{L}^\dagger \hat{E}_{xyzw} | \tilde{N} \rangle \nonumber \\ & + \sum_L R_{LP}^2 \langle \tilde{L} | \hat{T}^\dagger_{L} \hat{E}_{xyzw} |\tilde{L} \rangle + \langle \tilde{L} | \hat{\lambda}_{L}^\dagger \hat{E}_{xyzw} | \tilde{L} \rangle.
\end{align} \end{subequations} The second-order contributions to the correlated density matrix can be divided into three components, \begin{align} \mathbf{d}^{(2)} &= \mathbf{d}^{(2)}_{TT} + \mathbf{d}^{(2)}_{T \lambda} + \mathbf{d}^{(2)}_{\mathrm{shift}},\label{d2tot} \end{align} where $\mathbf{d}^{(2)} = \mathbf{d}^{(2)}_{T \lambda}$ in the CASPT2 nuclear gradient theory for the real shift.\cite{Shiozaki2011JCP3,Vlaisavljevich2016JCTC} The additional terms for the imaginary shift compensate the difference in the $\lambda$-equation. The first two terms are \begin{subequations} \begin{align} \bar{d}^{(2)}_{TT,xy} &= \sum_{L} R_{LP}^2 \langle \tilde{L} | \hat{T}^\dagger_{L} \hat{E}_{xy} \hat{T}_{L} | \tilde{L} \rangle,\label{dtdl1}\\ \bar{d}^{(2)}_{ T\lambda,xy} &= \sum_{L}\langle \tilde{L} | \hat{T}^\dagger_{L} \hat{E}_{xy} \hat{\lambda}_{L} | \tilde{L} \rangle,\label{dtdl} \\ d^{(2)}_{TT,xy} &= \left\{ \begin{array}{ll} \displaystyle \bar{d}^{(2)}_{TT,xy} - \sum_L N_L^{TT} \langle \tilde{L} | \hat{E}_{xy} | \tilde{L} \rangle & x,y \in \{r,s\} \\[8pt] \displaystyle \bar{d}^{(2)}_{TT,xy} & \mathrm{otherwise} \end{array} \right. \\ d^{(2)}_{ T\lambda,xy} &= \left\{ \begin{array}{ll} \displaystyle \bar{d}^{(2)}_{T\lambda,xy} - \sum_L N_L^{\lambda T} \langle \tilde{L} | \hat{E}_{xy} | \tilde{L} \rangle & x,y \in \{r,s\} \\[8pt] \displaystyle \bar{d}^{(2)}_{T\lambda,xy} & \mathrm{otherwise} \end{array} \right. \end{align} \end{subequations} in which we used \begin{subequations} \begin{align} &N_L^{TT} = R_{LP}^2 \langle \tilde{L} | \hat{T}_{L}^\dagger \hat{T}_{L} | \tilde{L}\rangle, \\ &N_L^{\lambda T} = \langle \tilde{L} | \hat{\lambda}_{L}^\dagger \hat{T}_{L} | \tilde{L}\rangle. \end{align} \end{subequations} The last term, $\mathbf{d}^{(2)}_{\mathrm{shift}}$, arises from the zeroth-order Hamiltonian $\boldsymbol{\mathcal{F}}$. For example, the contributions from $\tilde{\Omega} \in \left\{ ab,T \right\}$ to $\mathbf{d}^{(2)}_\mathrm{shift}$ are \begin{subequations} \begin{align} d^{(2)}_{\mathrm{shift},aa} & = -\sum_{b,T} \Lambda_{ab,T} \\ d^{(2)}_{\mathrm{shift},bb} & = -\sum_{a,T} \Lambda_{ab,T} \\ \bar{d}^{(2)}_{\mathrm{shift},rs} & = \sum_{L}\sum_{TU} \sum_{tu,vw} \bar{z}_{TU} V_{tv,L}^{T} \Gamma^{(3),LL}_{tu,vw,rs} V_{uw,L}^{U}. \end{align} \end{subequations} The zeroth-order energy in the denominator is taken into account by defining a norm-like quantity, \begin{align} & N^{\mathrm{shift}}_{L} = -\sum_{\tilde{\Omega}} \Lambda_{\tilde{\Omega}}^L, \\ & d^{(2)}_{\mathrm{shift},xy} = \left\{ \begin{array}{ll} \displaystyle \bar{d}^{(2)}_{\mathrm{shift},xy} - \sum_L N_L^{\mathrm{shift}} \langle \tilde{L} | \hat{E}_{xy} | \tilde{L} \rangle & x,y \in \{r,s\} \\[8pt] \displaystyle \bar{d}^{(2)}_{\mathrm{shift},xy} & \mathrm{otherwise} \end{array} \right. \label{dshift} \end{align} Since $\Lambda$ and $\bar{z}$ involve both $\lambda$ and $T$ [Eq.~\eqref{LambdaOmega}], $\mathbf{d}^{(2)}_\mathrm{shift}$ is also a second-order contribution. The correlated density matrices are then used to evaluate $Y_{xy}$, as elaborated in Ref.~\onlinecite{Celani2003JCP}. \begin{figure*}[tb] \includegraphics[width=0.8\linewidth]{01_molecules.png} \caption{Optimized geometries of (a) adenine, (b) $p-$HBDI$^-$, and (c) FeP (imaginary $\epsilon$ = 0.20 $E_\mathrm{h}$).
Graphic created with IboView.\cite{Knizia2013JCTC,Knizia2015ACIE} \label{figure:01}} \end{figure*} Similarly, the CI derivatives can be divided into four components, \begin{align} \tilde{y}_{I,M} & = \frac{\partial \mathcal{L}_{\mathrm{imag},P}}{\partial \tilde{c}_{I,M}} \nonumber \\ & = \tilde{y}_{I,M}^\mathrm{(0)+(1)}+ \tilde{y}_{I,M}^{T\lambda} + \tilde{y}_{I,M}^{TT} + \tilde{y}_{I,M}^\mathrm{shift}\label{ytot}. \end{align} The counterpart in the CASPT2 nuclear gradient theory with the real shift is $\tilde{y}_{I,M} = \tilde{y}_{I,M}^\mathrm{(0)+(1)} + \tilde{y}_{I,M}^{T\lambda}$.\cite{Shiozaki2011JCP3,Vlaisavljevich2016JCTC} The first two terms are \begin{subequations} \begin{align} & \tilde{y}_{I,M}^\mathrm{(0)+(1)} = \sum_{N} R_{MP} R_{NP} \left( 2 \langle I | \hat{H} | \tilde{N}\rangle + \langle I | \hat{T}_{M}^\dagger \hat{H} | \tilde{N}\rangle + \langle \tilde{N} | \hat{T}_{M}^\dagger \hat{H} | I \rangle \right) \nonumber \\ & \quad\quad + \langle \tilde{M} | \hat{\lambda}^\dagger_{M} \hat{H} | I \rangle + \langle I | \hat{\lambda}^\dagger_{M} \hat{H} | \tilde{M} \rangle, \\ & \tilde{y}_{I,M}^{T\lambda} = \langle \tilde{M} | \hat{\lambda}_{M}^\dagger (\hat{H}^{(0)} - E_M^{(0)}) \hat{T}_{M} | I \rangle + \langle I | \hat{\lambda}_{M}^\dagger (\hat{H}^{(0)} - E_M^{(0)}) \hat{T}_{M} | \tilde{M} \rangle \nonumber \\ & \quad\quad + 2 \sum_{rs} \langle I | \hat{E}_{rs} | \tilde{M} \rangle \left[ W_M \mathbf{g} ( \mathbf{d}^{(2)}_{T\lambda}) - N_M^{\lambda T} \mathbf{f} \right]_{rs}, \end{align} \end{subequations} and the additional terms are \begin{subequations} \begin{align} & \tilde{y}_{I,M}^{TT} = R_{MP}^2 \left( \langle I | \hat{T}_{M}^\dagger \hat{H}|\tilde{M} \rangle + \langle \tilde{M}| \hat{T}_{M}^\dagger \hat{H} | I \rangle \right) \nonumber \\ & \quad\quad+ 2 R_{MP}^2 \langle I | \hat{T}_{M}^\dagger (\hat{H}^{(0)} - E_M^{(0)}) \hat{T}_{M} | \tilde{M} \rangle \nonumber \\ & \quad\quad + 2 \sum_{rs} \langle I | \hat{E}_{rs} | \tilde{M} \rangle \left[ W_M \mathbf{g} ( \mathbf{d}^{(2)}_{TT}) - N_M^{TT} \mathbf{f} \right]_{rs}, \\ & \frac{1}{2} \tilde{y}_{I,M}^{\mathrm{shift}} = \sum_{rs} \langle I | \hat{E}_{rs} | \tilde{M}\rangle \left[ W_M \mathbf{g} \left( \mathbf{d}^{(2)}_\mathrm{shift} \right) - N^\mathrm{shift}_M \mathbf{f} \right]_{rs} \nonumber \\ &\quad\quad + \sum_{\tilde{\Omega} \tilde{\Omega}^\prime} \tilde{\lambda}_{M,\tilde{\Omega}} \tilde{T}_{M,\tilde{\Omega}^\prime} \frac{\partial S_{\tilde{\Omega} \tilde{\Omega}^\prime}}{\partial \tilde{c}_{I,M}} \epsilon^2 \Delta_{\tilde{\Omega}^\prime} \nonumber \\ &\quad\quad + \sum_{rs} e^{(1),MM}_{rs} \langle I | \hat{E}_{rs} | \tilde{M} \rangle \nonumber \\ &\quad\quad + \sum_{rs,tu} e^{(2),MM}_{rs,tu} \langle I | \hat{E}_{rs,tu} | \tilde{M} \rangle \nonumber \\ & \quad\quad + \sum_{rs,tu,vw} e^{(3),MM}_{rs,tu,vw} \langle I | \hat{E}_{rs,tu,vw} | \tilde{M} \rangle \nonumber \\ &\quad\quad + \sum_{rs,tu,vw} e^{(4),MM}_{rs,tu,vw} \sum_{xy} \langle I | \hat{E}_{rs,tu,vw,xy} | \tilde{M} \rangle f_{xy}. \label{yshift} \end{align} \end{subequations} The density-like terms $\mathbf{e}^{(n)}$ are \begin{align} \mathbf{e}^{(n)} = \mathbf{e}^{(n)S} + \mathbf{e}^{(n)F}, \end{align} where $\mathbf{e}^{(n)S}$ is defined in Eq.~\eqref{ens}, and $\mathbf{e}^{(n)F}$ is \begin{align} \mathbf{e}^{(n)F,MM} = \sum_{TU} \bar{z}_{TU} \frac{\partial }{\partial \mathbf{\Gamma}^{(n),MM}} \left( \mathbf{V}^\dagger \boldsymbol{\mathcal{F}} \mathbf{V} \right)_{TU}. 
\end{align} Note that $\mathbf{e}^{(n)F}$ does not appear in Eq.~\eqref{lagrangian}, as it also depends on the molecular integrals. For example, the contribution from $\tilde{\Omega} \in \left\{ ab,T \right\}$ to $\mathbf{e}^{(3)F}$ is \begin{align} e^{(3)F,MM}_{rs,vw,tu} & = \sum_{TU} \bar{z}_{TU} V^T_{rv,M} V^U_{sw,M} f_{tu}. \end{align} The working expressions for $\mathbf{d}^{(2)}$ and $\mathbf{e}$ in all other subspaces are compiled in the Supporting Information. The Lagrange multipliers $w_{MN}$ and $z^c$ are then evaluated using the procedure described in the previous works\cite{Celani2003JCP,Shiozaki2011JCP3,Vlaisavljevich2016JCTC} as \begin{subequations} \begin{align} w_{MN} & = -\frac{1}{2} \frac{1}{E_M^{(0)} - E_N^{(0)}} \sum_I \left( \tilde{c}_{I,M} \tilde{y}_{I,N} - \tilde{c}_{I,N} \tilde{y}_{I,M} \right), \\ z^c_{ij} & = - \frac{1}{2} \frac{Y_{ij} - Y_{ji}}{f_{ii}- f_{jj}}, \\ z^c_{ab} & = - \frac{1}{2} \frac{Y_{ab} - Y_{ba}}{f_{aa}- f_{bb}}. \end{align} \end{subequations} Finally, the $Z$-vector equation is solved using $Y$ and $y$ as the source terms. \begin{figure} \includegraphics[width=\linewidth]{02.png} \caption{Excitation energies (eV) for $p-$HBDI$^-$. Plots of (a) the vertical excitation energy of $\mathrm{S}_1$, (b) the adiabatic excitation energy of $\mathrm{S}_1$--$\mathrm{S}_0$, (c) the difference in energy between the conical intersection and the Franck--Condon point for the $P$ conformer, and (d) that for the $I$ conformer. \label{figure:02}} \end{figure} \section{Numerical Examples} The numerical results for the imaginary shift formalism are presented in the following subsections. Geometry optimizations were performed using XMS-CASPT2 as implemented in the {\sc bagel} program with both real and imaginary shifts for comparison. Calculations on adenine and the deprotonated form of 4-hydroxybenzylidene-1,2-dimethylimidazolinone ($p-$HBDI$^-$) were performed with cc-pVDZ\cite{Dunning1989JCP} and the corresponding density-fitting basis set. SVP\cite{Schafer1992JCP} and the associated fitting basis were used for iron(II) porphyrin (FeP). Calculations on $p-$HBDI$^-$ used an active space consisting of four electrons in three orbitals (4\textit{e}, 3\textit{o}). We used a minimal active space of (4\textit{e}, 4\textit{o}) for adenine. The inorganic porphyrin complex FeP was optimized for the low-spin singlet state with scalar relativistic effects using the Douglas--Kroll--Hess Hamiltonian.\cite{Douglas1974,Hess1986PRA} We employed an active space of (10\textit{e}, 9\textit{o}) for FeP, which is a minimal active space for metal porphyrin--ligand binding;\cite{Jensen2005JInorgBiochem,Falahati2018NatComm} it includes five metal 3$\textit{d}$ orbitals and four Gouterman orbitals on the ligand.\cite{Gouterman1959JCP,Gouterman1961JMS} The optimized structures for all of the molecules in this study are depicted in Fig.~\ref{figure:01}. \subsection{Accuracy} \begin{figure}[t] \includegraphics[width=\linewidth]{03_surfaces.png} \caption{Pictorial representation of the $\mathrm{S}_0$ and $\mathrm{S}_1$ surfaces. Panels a, b, c, and d correspond to the panels in Fig.~\ref{figure:02}. \label{figure:03}} \end{figure} $p-$HBDI$^-$ is an anionic green fluorescent protein model chromophore (Fig.~\ref{figure:01}). We computed the vertical excitation energy at the S$_0$ geometry, the adiabatic excitation energy, and the energies at the conical intersection points relative to the Franck--Condon point with various values of the real and imaginary shifts.
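In the analysis that follows, shift-free reference values are estimated by scanning the shift parameter and extrapolating to $\epsilon = 0$. A minimal sketch of such a scan-and-extrapolate step (the data array and the low-order polynomial form are placeholders of our choosing; the fitting form behind the numbers quoted below is not reproduced here):
\begin{verbatim}
import numpy as np

# hypothetical scan: excitation energies (eV) at several
# imaginary-shift values (E_h); the numbers are placeholders
eps    = np.array([0.05, 0.10, 0.15, 0.20])
e_scan = np.array([2.41, 2.42, 2.43, 2.44])

coeff = np.polyfit(eps, e_scan, deg=2)   # low-order fit in eps
print("extrapolated (eps -> 0):", np.polyval(coeff, 0.0))
\end{verbatim}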
The results are shown in Fig.~\ref{figure:02}. A schematic representation of each of the quantities is presented in Fig.~\ref{figure:03}. The $\mathrm{S_1}$ vertical excitation energies of $p-$HBDI$^-$ are shown in Fig.~\ref{figure:02}(a) as a function of the shift value. The computed energies increase with increasing shift value, because less correlation is included in the calculation at larger shift values.\cite{Roos1996JMS} However, the results obtained with the imaginary shift are less sensitive to variation of the shift parameter than those with the real shift. This must be ascribed to the quartic behavior of the error in the imaginary shift approach, whereas the asymptotic behavior with the real shift has a linear dependence on the shift parameter [see Eqs.~\eqref{realaa} and \eqref{imagaa}]. For very small shift values near zero ($< 0.04~E_\mathrm{h}$), the presence of intruder states is apparent. By fitting the data in the range $\epsilon=0.05$--$0.20~E_\mathrm{h}$, the extrapolated excitation energy is found to be 2.40 (2.39) eV using the imaginary (real) shift. If one chooses a small shift parameter, for example, $\epsilon= 0.05~E_\mathrm{h}$, the results from the real (2.42 eV) and imaginary (2.41 eV) shift calculations quantitatively match each other. However, with practical values of $\epsilon$ that are commonly used to avoid the intruder state problem,\cite{Roos1996JMS} for instance, $\epsilon$ = 0.20 $E_\mathrm{h}$, the vertical excitation energy is computed to be 2.51 and 2.44 eV with the real and imaginary shifts, respectively. Given that the expected value is $\approx$ 2.40 eV, the error with the imaginary shift (40 meV) is less than half of that with the real shift (110 meV). The vertical excitation energy computed with $\epsilon$ = 0.40 $E_\mathrm{h}$ is 2.64 eV and 2.56 eV for the real and imaginary shifts, respectively. At this value of $\epsilon$, the error due to the real shift is comparable to the intrinsic accuracy of the CASPT2 model.\cite{Schreiber2008JCP} The adiabatic excitation energies [Fig.~\ref{figure:02}(b)], based on the geometry optimization of both the $\mathrm{S}_0$ and $\mathrm{S}_1$ states, similarly show linear and quartic behavior for the real and imaginary shifts, respectively. For calculations with shift values between 0.005 and 0.040 $E_\mathrm{h}$, convergence was not achieved due to an intruder state ($\mathrm{S}_2$) during the geometry optimization on the $\mathrm{S}_1$ surface. The adiabatic excitation energy extrapolated using the same procedure as above from the real and imaginary shift results is found to be 2.31 and 2.30 eV, respectively. With $\epsilon = 0.20~E_\mathrm{h}$, which is commonly used in practical calculations, the adiabatic excitation energy is 2.42 and 2.36 eV for the real and imaginary shifts, respectively, which means that the error with the imaginary shift (60 meV) is roughly half of that with the real shift (120 meV). The same trend holds for larger values of $\epsilon$. Figures~\ref{figure:02}(c) and (d) show the energies at the conical intersections between the $\mathrm{S}_1$ and $\mathrm{S}_0$ surfaces relative to the $\mathrm{S}_1$ energy at the Franck--Condon point. We considered both the phenoxy ($P$) and imidazolinone ($I$) twisted conformers of $p-$HBDI$^-$. For the $P$ conformer, the resulting energies remain constant with respect to the shift parameters owing to fortuitous error cancellation, in both the imaginary and real shift cases.
The sensitivity of the result is, however, apparent for the $I$ conformer [Fig.~\ref{figure:02}(d)]; for small values of $\epsilon$ (0.05--0.20 $E_\mathrm{h}$), the results computed with imaginary shifts are nearly constant, whereas with the real shift the results decrease linearly. For shift values above 0.20 $E_\mathrm{h}$, the results diverge at nearly the same rate. At $\epsilon = 0.20~E_\mathrm{h}$, the results with the real and imaginary shifts are $-0.18$ and $-0.14$~eV, which are to be compared with the extrapolated values $-0.10$ and $-0.11$ eV. \begin{table}[t] \caption{Root-mean-square deviation ({\AA}) of the $p-$HBDI$^-$ $\mathrm{S}_0$ geometry relative to that computed with imaginary $\epsilon$ = 0.20 $E_\mathrm{h}$. \label{table:01}} \begin{ruledtabular} \begin{tabular}{ccc} $\epsilon$ ($E_\mathrm{h}$) & Real & Imaginary \\ \hline 0.00 & 0.00135 & 0.00139 \\ 0.01 & 0.00135 & 0.00754 \\ 0.10 & 0.00054 & 0.00051 \\ 0.20 & 0.00132 & ------ \\ 0.30 & 0.00278 & 0.00028 \\ 0.40 & 0.00432 & 0.00078 \\ 0.60 & 0.00770 & 0.00278 \\ 0.80 & 0.01102 & 0.00584 \\ 1.00 & 0.01418 & 0.01003 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table*}[tb] \caption{Wall times in seconds for representative steps in the XMS-CASPT2 nuclear gradient evaluation. The timing was measured using 16 nodes of a SandyBridge cluster purchased in 2012 (Xeon E5-2650 2.00GHz, total of 256 CPU cores). \label{table:02}} \begin{ruledtabular} \begin{tabular}{cccccrrrrrr} System & Atoms / Electrons & Basis\footnotemark[1] & CAS & States & \multicolumn{1}{c}{Amp.\footnotemark[2]} & \multicolumn{1}{c}{$\lambda$\footnotemark[3]} & \multicolumn{1}{c}{Den. (shift)\footnotemark[4]} & \multicolumn{1}{c}{CI deriv.} & \multicolumn{1}{c}{$Z$ vector} &\multicolumn{1}{c}{Total\footnotemark[5]} \\ \hline \multicolumn{11}{c}{Real shift}\\ adenine & 15 / 70 & 165 (815) & (4\textit{e}, 4\textit{o}) & 5 & 20 & 18 & 6.2 (--) & 19 & 4.6 & 84 \\ $p-$HBDI$^-$ & 27 / 114 & 279 (1373) & (4\textit{e}, 3\textit{o}) & 3 & 59 & 49 & 25 (--) & 74 & 14 & 257 \\ FeP & 37 / 186 & 427 (2288) & (10\textit{e}, 9\textit{o}) & 5 & 744 & 325 & 297 (--) & 879 & 160 & 2947 \\ \multicolumn{11}{c}{Imaginary shift}\\ adenine & 15 / 70 & 165 (815) & (4\textit{e}, 4\textit{o}) & 5 & 22 & 21 & 7.4 (0.8) & 20 & 4.5 & 87 \\ $p-$HBDI$^-$ & 27 / 114 & 279 (1373) & (4\textit{e}, 3\textit{o}) & 3 & 61 & 56 & 26 (0.8) & 75 & 14 & 260 \\ FeP & 37 / 186 & 427 (2288) & (10\textit{e}, 9\textit{o}) & 5 & 700 & 336 & 729 (383) & 1028 & 196 & 3387 \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{The number of basis functions. The numbers in parentheses are the numbers of auxiliary functions.} \footnotetext[2]{Total time for the CASPT2 amplitude equation, see Sec.~\ref{energysec}.} \footnotetext[3]{Total time for the CASPT2 $\lambda$-equation [Eq.~\eqref{Lambda_now}].} \footnotetext[4]{Time for computing the correlated density matrices (which includes the timing for $\tilde{y}^\mathrm{shift}$). The numbers in parentheses are the times for computing $\mathbf{d}^{(2)}_{\mathrm{shift}}$ and $\tilde{y}^\mathrm{shift}_{I,M}$ [Eqs.~\eqref{dshift}~and~\eqref{yshift}], which are unique to the CASPT2 nuclear gradients with the imaginary shift.} \footnotetext[5]{Total wall time for a geometry optimization step, which includes the time for the CASSCF and CASPT2 energy evaluations and the computation of MO integrals and reference RDMs.} \end{table*} A similar trend in error is observed for the optimized geometries.
To illustrate, Table \ref{table:01} lists the root-mean-square deviation (RMSD) in {\aa}ngstr{\"o}m for the $\mathrm{S}_0$ geometry computed at various shift parameters. We used the geometry calculated with the imaginary shift $\epsilon$ = 0.20 $E_\mathrm{h}$ as the reference. The structural differences with the real or imaginary shift of various values are under a hundredth of an {\aa}ngstr{\"o}m, except when $\epsilon$ is taken to be as large as 1.00 $E_\mathrm{h}$. The RMSD tends to increase with increasing shift values, but more slowly with the imaginary shift. \subsection{Timing} The computational cost of geometry optimization with the imaginary shift was assessed for adenine, $p-$HBDI$^-$, and FeP and compared against that with the real shift. To make the comparison consistent, all of the calculations were performed using 16 nodes of a Xeon E5-2650 cluster (SandyBridge 2.0~GHz, 32 CPUs/256 CPU cores, purchased in 2012). All of the timing calculations were performed using $\epsilon$ = 0.20 $E_\mathrm{h}$. Our implementation does not exploit the spatial symmetry of molecules. The results are compiled in Table \ref{table:02}. When the active spaces are small, the wall times for calculating the CASPT2 nuclear gradients with the real and imaginary shifts were found to be essentially identical. For instance, one geometry optimization step for adenine with CAS(4$e$, 4$o$) took 84 and 87 seconds, respectively, using the real and imaginary shifts. The same held for the geometry optimization of $p-$HBDI$^-$ with CAS(4$e$, 3$o$), which took 257 and 260 seconds, respectively. Of these timings, roughly 20--25\% of the time was spent on the CASPT2 energy evaluation, 20\% on solving the $\lambda$-equation, and 25--30\% on computing the CI derivatives. The rest was due to the computation of the correlated density matrices and the solution of the $Z$-vector equation. When a large active space was used, however, the difference in the computational costs became noticeable, though it was still minor. For example, a geometry optimization step for FeP with CAS(10$e$, 9$o$) took 2947 and 3387 seconds with the real and imaginary shifts, respectively, indicating that the nuclear gradient evaluation with the imaginary shift for this case was 15\% more expensive than the real shift counterpart. The timing difference between the real and imaginary shift cases is mainly ascribed to the computational cost required for evaluating the imaginary shift terms, $\mathbf{d}^{(2)}_\mathrm{shift}$ in Eq.~\eqref{d2tot} and $\tilde{y}_{I,M}^\mathrm{shift}$ in Eq.~\eqref{ytot}, which roughly scales as $O(N_\mathrm{act}^9)$ (see the Supporting Information). Therefore, as the number of active orbitals increases, the additional cost of evaluating these terms is expected to be more pronounced. For FeP, the wall time for computing the correlated density matrices with the imaginary shift was 729 seconds, among which the imaginary shift term was responsible for 383 seconds (52\%). This is in contrast to the adenine and $p-$HBDI$^-$ cases, where the times for computing $\mathbf{d}^{(2)}_\mathrm{shift}$ and $\tilde{y}_{I,M}^\mathrm{shift}$ were less than a second, constituting only a fraction of the time for computing the correlated density matrices. Compared to $\mathbf{d}^{(2)}_\mathrm{shift}$ and $\tilde{y}_{I,M}^\mathrm{shift}$, the computational cost of the other additional terms was found to be only marginal.
The last term in Eq.~\eqref{Lambda_now} requires only one additional evaluation of a residual-like term at the beginning of the $\lambda$-iteration. As a consequence, the total time for the $\lambda$-equation increased only slightly with the imaginary shift, compared to that with the real shift: by 3, 7, and 11 seconds for adenine, $p-$HBDI$^-$, and FeP, respectively. Evaluation of the other additional terms, $\mathbf{d}^{(2)}_{TT}$ in Eq.~\eqref{d2tot} and $\tilde{y}_{I,M}^{TT}$ in Eq.~\eqref{ytot}, can be combined with that of the conventional terms; therefore, the increase in the computational cost due to these terms was not significant either: they took 0.4, 1, and 49 seconds for adenine, $p-$HBDI$^-$, and FeP, respectively. \section{Conclusions} We have derived and implemented the nuclear gradients for CASPT2 with the imaginary shift by extending the CASPT2 nuclear gradient code for the real shift. The numerical results for the vertical and adiabatic excitation energies and the energy differences between the conical intersections and the Franck--Condon point for $p-$HBDI$^-$ showed that the results are less sensitive to variation of the imaginary shift values than to those of the real shift. When small active spaces were used, the additional cost for computing the CASPT2 nuclear gradients with the imaginary shift was found to be marginal. In a calculation on FeP, for which a larger active space [CAS(10$e$, 9$o$)] was used, we observed that the wall times with the imaginary shift were roughly 15\% more than those with the real shift. The difference has been shown to be due to the computation of the correlated density-like quantities that are associated with the imaginary shift terms. The programs have been interfaced to the {\sc bagel} package, which is publicly available for use in chemical applications. \section{Acknowledgments} This work has been supported in part by the National Science Foundation [ACI-1550481 (JWP) and CHE-1351598 (TS)]. RA-S has been supported by the Air Force Office of Scientific Research (AFOSR FA9550-18-1-0252). JWP has also been supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1C1C1003657). \section{Supporting Information} A complete set of equations, including the working expressions for $\mathbf{d}^{(2)}$ and $\mathbf{e}$, and tables that compile the raw data for Fig.~\ref{figure:02}.
\section{Introduction} The rational Calogero model (with and without an external harmonic trap) and its various generalizations are among the most well-studied integrable systems in physics and mathematics \cite{calogero1969ground,calogero1975exactly,calogero1969_1,calogero1971solution, moser1976three,polychronakos2006physics,polychronakos1992exchange}. In its traditional form, it describes $N$ identical non-relativistic particles in one dimension, interacting through two-body inverse-square potentials in the presence of an external harmonic potential \cite{calogero1971solution,mainmanas,CalogeroMosermodel}. This model and its various extensions appear in many branches of physics and mathematics and have connections and relevance to fractional statistics, fluid mechanics, spin chains, gauge theories, and matrix models, to name a few \cite{olshanetsky1981classical,polychronakos2006physics,jevicki1980quantum,sakita1985quantum, polychronakos1995waves,jevicki376nonperturbative,hikami1993integrable, minahan1993integrable,polychronakos1993lattice,polychronakos1994exact}. As a result, it has been studied extensively \cite{sutherland2004beautiful, olshanetsky1983quantum, olshanetsky1981classical,perelomov1990integrable}. The rational Calogero-Moser model is a model with power-law interaction where every particle is coupled to every other particle. It is therefore considered a relatively long-ranged model, although in 1D it may have a few properties similar to those of relatively short-ranged models \cite{sutherland1975exact}. It was also shown that models with inverse-square interactions in 1D behave, for some physical quantities, similarly to short-ranged models \cite{kulkarni2011cold}. \\ Given that, in most physical realizations, one has confined particles interacting with each other over some length scales, it is of importance to study systems that remain integrable \cite{polychronakos1992quartic,perelomov1990integrable} even in confining potentials. For example, a recent breakthrough in cold atomic systems has been the realization of an almost uniform gas of atoms confined in a box-like potential \cite{gaunt2013bose}. One of the main challenges is to find models that remain integrable even when they are confined in external potentials. Hence, we are looking for integrable systems with two properties: (i) inter-particle interactions where every particle interacts with every other particle over some length scale, and (ii) strong confinement beyond some length scale. Therefore, we study the model described below, which has inter-particle interactions over some length scale, strong external confinement, and the rich property of classical integrability. In addition to these properties, the model we consider exhibits a duality and admits a field theory formulation. \\ In this paper, we investigate the behaviour of a classical system in a confining potential, interacting through an inverse-square sine-hyperbolic potential. This can be viewed as a generalization of the rational Calogero-Moser model which is now periodic on the imaginary line.
This is called the Hyperbolic Calogero (HC) model.\\ The general form of the Hamiltonian for the HC model reads, \begin{eqnarray} {\cal H}=\sum_{i=1}^{N}\left[\frac{p_{i}^{2}}{2}+V(x_i)+\sum_{i,\,j\neq i}^{N}\frac{1}{2L^{2}}\left(\frac{g^{2}}{\sinh^{2}\left(\frac{x_{i}-x_{j}}{L}\right)}\right)\right] \label{gham} \end{eqnarray} where \begin{eqnarray} \label{vx} V(x_{i})=a_{1}\cosh\left(\frac{2x_{i}}{L}\right)+b_{1}\sinh\left(\frac{2x_{i}}{L}\right)+a_{2}\cosh\left(\frac{4x_{i}}{L}\right)+b_{2}\sinh\left(\frac{4x_{i}}{L}\right) \end{eqnarray} where $x_{j}$ are the coordinates of the particles, $p_{j}$ are their canonical momenta, $L$ is a length scale associated with the model, $g$ is the coupling constant, and $N$ is the number of particles. We take the mass of the particles to be unity. The above model is classically integrable \cite{polychronakos1992new}. Apart from the interaction between particles, another basic difference between the rational and hyperbolic models is in the structure of the external potential. In the hyperbolic case, the external potential is essentially flat within a certain region around the origin and rises steeply after that, thus acting like a confinement for the particles moving in it. The size of the system turns out to be $L_c \sim L\sinh^{-1} \Big(\sqrt{\frac{g N}{A}}\Big)$, where $A$ is the strength of the external confining potential (to be discussed later). Particles are confined within this length. We find that the length of the system essentially scales logarithmically (hence, very slowly) with the number of particles, which is very different from the rational model where particles spread out as their number increases (the length in the rational case scales as $\sqrt{N}$). This creates a major difference in the particle density profile.\\ In Section \ref{dhc}, we formulate the HC Hamiltonian with an external potential from a first-order equation involving dual variables. We formulate the initial position and velocity distributions of particles for obtaining soliton solutions. This means finding a special set of initial conditions $\{ x_i (0), p_i(0)\}$, where $i=1,2,\dots,N$, such that when the particles move under the influence of the Hamiltonian, the density profile of the particles is a robust moving soliton. We find multi-solitons in this integrable system. This is done by solving the damping equation that we explain later. We then study the dynamics of the particles for this special set of initial conditions by solving the differential equations numerically. We examine the integrals of motion, which provide information about the integrability of the system. We also check the effects of two-, three- and four-soliton collisions. We examine the effects of the various parameters on the background density (the density without formation of any soliton). We also check the effects of quenching the parameters on the soliton motion. \\ In Section 3, we formulate the equations of motion for the dual variables and analyze their trajectories in the complex plane. We also check the integrals of motion, and examine the motion of the dual variable after quenching. We find an analytic form for the time period of oscillation of the soliton by computing the periodicity of the motion of the dual variables in the complex plane. \\ In Section 4, we derive and study the field theory formulation of this model in the continuum limit and present the corresponding soliton solutions in terms of meromorphic fields.
For this discussion, the effects of the external potential are neglected, as the effective length of the box is taken to be infinite. Here the particles are essentially replaced by a density field. The integrability and other rich properties of the underlying particle systems suggest that the corresponding fluid mechanical equations are also integrable and point to the existence of soliton solutions for the HC field theory (without external potential). We find an analytic form for the equilibrium density distribution of the background density profile as well as for the soliton solutions. We find that the equilibrium density, i.e., the background density (in the absence of solitons), is similar to a hyperbolic version of the trigonometric equations that appear in the context of large-N gauge theories \cite{gross1993possible, wadia1980n}. We also provide an analytic expression for the soliton velocity and express its connection with the motion of the dual variable in the complex plane.\\ Finally, in Section 5, we state our conclusions and provide directions for future investigation. \section{Dual Hyperbolic Calogero system and formation of the Hamiltonian} \label{dhc} In this section, we aim to formulate the first-order dual equations of motion for the dynamical system of particles of the confined HC model. To start with, we consider a system of $N$ particles with coordinates $x_i$, $i = 1, \dots , N$, and $M$ dual particles with coordinates $z_n$, $n = 1, \dots , M$, moving in the complex plane and obeying the {\it first-order} equations of motion \cite{mainmanas}, \begin{eqnarray} \label{xdot} \dot{x_{i}}-i\frac{A}{L}\sinh\left(\frac{2x_{i}}{L}\right)=-i\frac{g}{L}\sum_{j\neq i}^{N}&&\coth\left(\frac{x_{i}-x_{j}}{L}\right)\nonumber\\ &&+i\frac{g}{L}\sum_{n=1}^{M}\coth\left(\frac{x_{i}-z_{n}}{L}\right) \end{eqnarray} \begin{eqnarray} \label{zdot} \dot{z_{n}}-i\frac{A}{L}\sinh\left(\frac{2z_{n}}{L}\right)=i\frac{g}{L}\sum_{m\neq n}^{M}&&\coth\left(\frac{z_{n}-z_{m}}{L}\right)\nonumber\\ &&-i\frac{g}{L}\sum_{i=1}^{N}\coth\left(\frac{z_{n}-x_{i}}{L}\right) \end{eqnarray} These are a set of coupled first-order differential equations describing the motion of the $N$ values of $x_i$ and the $M$ values of $z_n$. The dynamics are fully described by the initial values of these $M+N$ variables. One can show that if the initial values of $x_i$ are chosen to be real, they remain so for all future times. Using this formalism we can map the motion of $N$ particles moving on the real axis to the motion of $M$ dual variables moving in the complex plane. The number of dual variables is not related to the number of particles on the real axis; i.e., this formalism is valid even for $M<N$. Remarkably, the second-order equations completely decouple from each other.
They are of the following form, \begin{eqnarray} \label{xddot} \ddot{x_{i}}&=&-\frac{2A^{2}}{L^{3}}\sinh\left(\frac{2x_{i}}{L}\right)\cosh\left(\frac{2x_{i}}{L}\right)+\frac{2Ag}{L^{3}}(N-M-1)\sinh\left(\frac{2x_{i}}{L}\right)\nonumber\\ &&+\frac{2g^{2}}{L^{3}}\sum_{j\neq i}^{N}\left(\frac{\cosh\left(\frac{x_{i}-x_{j}}{L}\right)}{\sinh^{3}\left(\frac{x_{i}-x_{j}}{L}\right)}\right)\textbf{\hspace{1.5 in}$i=1,\dots,N$ } \end{eqnarray} \begin{eqnarray} \label{zddot} \ddot{z_{n}}&=&-\frac{2A^{2}}{L^{3}}\sinh\left(\frac{2z_{n}}{L}\right)\cosh\left(\frac{2z_{n}}{L}\right)+\frac{2Ag}{L^{3}}(N-M+1)\sinh\left(\frac{2z_{n}}{L}\right)\nonumber\\ &&+\frac{2g^{2}}{L^{3}}\sum_{m\neq n}^{M}\left(\frac{\cosh\left(\frac{z_{n}-z_{m}}{L}\right)}{\sinh^{3}\left(\frac{z_{n}-z_{m}}{L}\right)}\right)\textbf{\hspace{1.5 in}$n=1,\dots,M$ } \end{eqnarray} Once the initial positions and conjugate momenta of the particles (on the real line) are obtained using the first-order Eq.~\ref{xdot}, we can study the dynamics of those particles from Eq.~\ref{xddot} without requiring further information about the dual variables.\\ The Hamiltonian corresponding to Eq.~\ref{xddot} is \begin{eqnarray} {\cal H} &=& \sum_{i=1}^{N}\left(\frac{p_i^{2}}{2}+\frac{A^{2}}{2L^{2}}\sinh^{2}\left(\frac{2x_{i}}{L}\right)-\frac{Ag}{L^{2}}(N-M-1)\cosh\left(\frac{2x_{i}}{L}\right)\right)\nonumber\\ &+& \sum_{i,\,j\neq i}^{N}\frac{1}{2L^{2}}\left(\frac{g^{2}}{\sinh^{2}\left(\frac{x_{i}-x_{j}}{L}\right)}\right) \label{HyCM} \label{Hsquare} \end{eqnarray} \subsection{Multi-Soliton Solutions} Solitons are excitations (solitary pulse-like structures) that are formed due to the collective motion of the particles. These solitons do not disperse or break down as the particles move in time; the resulting density profile (the collective behaviour) is a robust excitation. Soliton solutions belong to a very special set of initial conditions in the space of initial conditions, for which the motion of all the particles shows a coherent behaviour. This occurs due to the delicate interplay between non-linearity, non-locality and dispersive effects, and is very sensitive to the system parameters. We will further analyse its structure and behaviour in later sections.\\ We will now present the way in which the initial conditions for soliton solutions can be obtained in general. Note that finding soliton solutions means restricting the space of initial conditions of the $N$ values of $x_i$ and $p_i$. Equating the imaginary parts of Eq.~\ref{xdot}, we get the following equation, \begin{eqnarray} \label{fixsol} -\frac{A}{L}\sinh\left(\frac{2x_{i}}{L}\right)&=&-\frac{g}{L}\sum_{j\neq i,\atop j=1}^{N}\coth\left(\frac{x_{i}-x_{j}}{L}\right)\nonumber\\ &&+\frac{g}{L}Re\sum_{n=1}^{M}\left[\coth\left(\frac{x_{i}-z_{n}}{L}\right)\right]\textbf{\hspace{0.5 in}$i=1,\dots,N$} \end{eqnarray} Equating the real parts of Eq.~\ref{xdot}, we get the conjugate momenta ($p_i \equiv \dot{x}_i$), \begin{eqnarray} p_{i}=\frac{g}{L}Im\left[\sum_{n=1}^{M}\coth\left(\frac{x_{i}-z_{n}}{L}\right)\right] \textbf{\hspace{0.5 in}$i=1,\dots,N$} \label{fixvsol} \end{eqnarray} The fixed points of this set of $N$ equations, Eq.~\ref{fixsol}, give the equilibrium positions of the particles for obtaining soliton solutions. Essentially, at such a point $\dot{x_{i}}=0$ for all $i=1,2,\dots,N$ (a minimal numerical sketch of these relations is given below).
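As a numerical illustration of Eqs.~\ref{fixsol} and \ref{fixvsol} (a sketch of ours written in Python/NumPy; the function name is illustrative), the following evaluates, for given particle positions $x$ and dual positions $z$, the fixed-point residual whose zeros define the special initial positions, together with the corresponding momenta:
\begin{verbatim}
import numpy as np

def residual_and_momenta(x, z, g, A, L):
    """Residual of Eq. (fixsol) and momenta of Eq. (fixvsol).

    x: (N,) real particle positions; z: (M,) complex dual positions.
    The residual vanishes at the special initial conditions x_i(0).
    """
    dx = (x[:, None] - x[None, :]) / L    # (N, N) pair separations
    np.fill_diagonal(dx, 1.0)             # dummy value, zeroed below
    coth_xx = 1.0 / np.tanh(dx)
    np.fill_diagonal(coth_xx, 0.0)        # exclude the j = i terms
    coth_xz = 1.0 / np.tanh((x[:, None] - z[None, :]) / L)  # (N, M)
    res = (A / L) * np.sinh(2 * x / L) \
        - (g / L) * coth_xx.sum(axis=1) \
        + (g / L) * coth_xz.real.sum(axis=1)
    p = (g / L) * coth_xz.imag.sum(axis=1)
    return res, p
\end{verbatim}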
We can equivalently write Eq.~\ref{fixsol} as, \begin{eqnarray} \label{dudx} \frac{\partial U}{\partial x_{i}}=0\textbf{\hspace{0.5 in}$i=1,\dots,N$} \end{eqnarray} where, \begin{eqnarray} \frac{\partial U}{\partial x_{i}}= \frac{A}{L}\sinh\left(\frac{2x_{i}}{L}\right)-\frac{g}{L}\sum_{j\neq i,\atop j=1}^{N}\coth\left(\frac{x_{i}-x_{j}}{L}\right)+\frac{g}{L}Re\sum_{n=1}^{M}\left[\coth\left(\frac{x_{i}-z_{n}}{L}\right)\right] \end{eqnarray} Therefore, we can form a damping equation \cite{polymanas17} for a chosen set of $z$ values, \begin{eqnarray} \label{damp} \dot{x}_{i}=-\gamma\frac{\partial U}{\partial x_{i}} \end{eqnarray} where $\gamma$ can be considered a damping coefficient. This is the numerical method we employ to solve Eq.~\ref{fixsol}, yielding solutions corresponding to a local minimum of $U$ (a minimal numerical sketch of this relaxation is given at the end of Sec.~\ref{glog}). The damping acts like a viscous force which slides the particles towards the minimum of the above potential, i.e., towards their equilibrium positions. This finally gives us the special set $\{ x_i(t=0)\}$; the special set $\{ p_i (t=0) \}$ is then obtained from Eq.~\ref{fixvsol}. \subsection{Background} The background constitutes the positions of the particles corresponding to $M=0$ (no solitonic excitations). From Eq.~\ref{fixvsol}, it is clear that when there is no dual variable, all $N$ momenta $p_j$ are equal to $0$; hence this is called the static solution. No soliton formation occurs, and the particles simply sit at their equilibrium positions. For this situation, Eq.~\ref{fixsol} becomes, \begin{eqnarray} \label{background} \frac{A}{L}\sinh\left(\frac{2x_{i}}{L}\right)&=&\frac{g}{L}\sum_{j\neq i\atop j=1}^{N}\coth\left(\frac{x_{i}-x_{j}}{L}\right)\textbf{\hspace{0.5 in}$i=1,\dots,N$} \end{eqnarray} Solving the corresponding damping equation, we get the positions of the $N$ particles. We then plot the density of the particles as a function of position (see Fig.~\ref{fig:bkg301}). Note that, just for plotting purposes in Fig.~\ref{fig:bkg301}, the density $\rho(x)$ is defined as the inverse of the inter-particle distance, and the position index is taken as the mean position of the corresponding two particles. This should not be confused with the classical density field $\rho(x)$ that we introduce later in the field theory section (Sec.~\ref{cft}). \subsection{Relationship of background solutions with generalized Log gas} \label{glog} It is interesting to note that the equilibrium solutions of the classical HC model (Eq.~\ref{Hsquare} with $M=0$), which are given by Eq.~\ref{background}, also constitute the minimum energy configuration of a generalized version of the Log gas given by, \begin{equation} V_{\log} =\frac{A}{2}\sum_{i=1}^N\cosh \bigg( \frac{2 x_i}{L}\bigg) - \frac{g}{2}\sum_{i \neq j}^N \log \Big| \sinh \bigg(\frac{ x_i - x_j }{L} \bigg) \Big| \label{vglog} \end{equation} Although there has been a great deal of work on connections between the traditional Log gas and Random Matrix Theory \cite{forrester2010log,o2010gaussian,gustavsson2005gaussian,majumdar2014top} and their relation to Calogero-Moser systems \cite{CalogeroMosermodel}, to the best of our knowledge, little is understood about the relationship between the classical HC model, the generalized version of the Log gas, and Random Matrix Theory. It is to be noted that the trigonometric version of the above generalized Log gas (Eq.~\ref{vglog}) effectively appears in the context of large-N gauge theories \cite{gross1993possible,wadia1980n}.
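As referenced above, the damping equation, Eq.~\ref{damp}, can be integrated with a simple forward-Euler scheme until the gradient is negligible. A minimal self-contained sketch for the background configuration ($M=0$, Eq.~\ref{background}) follows; the particle number, step size $\gamma$, starting grid, and iteration budget are our illustrative choices, and $\gamma$ must be small enough that particles never cross:
\begin{verbatim}
import numpy as np

def grad_U(x, g, A, L):
    # dU/dx_i for M = 0; its zeros satisfy Eq. (background)
    dx = (x[:, None] - x[None, :]) / L
    np.fill_diagonal(dx, 1.0)       # dummy value, zeroed below
    coth = 1.0 / np.tanh(dx)
    np.fill_diagonal(coth, 0.0)
    return (A / L) * np.sinh(2 * x / L) - (g / L) * coth.sum(axis=1)

N, g, A, L, gamma = 50, 0.5, 150.0, 5.0, 1e-4
x = np.linspace(-3.0, 3.0, N)       # arbitrary non-overlapping start
for _ in range(500000):
    f = grad_U(x, g, A, L)
    x -= gamma * f                  # Eq. (damp), forward Euler
    if np.max(np.abs(f)) < 1e-10:
        break
\end{verbatim}
For soliton initial conditions ($M \geq 1$), the same relaxation is applied to the full gradient of Eq.~\ref{dudx}, after which the momenta follow from Eq.~\ref{fixvsol}.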
\begin{figure}[H] \centering \includegraphics[width=0.9\linewidth]{bkgcompact1.png} \caption{(Left) Various confining potentials. (Right) Density profile without the soliton, $\rho_0(x)$. There is no dual variable ($M=0$); $L=5$, number of particles $N = 300$, $g=0.5$, $A=150$. Points represent brute-force numerics and the line represents our analytical expression. } \label{fig:bkg301} \end{figure} We get a plateau-like graph where the density $\rho_0$ is essentially constant throughout the characteristic length of the box and falls sharply after that. This replicates the flatness of the external potential within the box-like region. We were able to obtain the exact functional form of this curve, which will be discussed in the field theory section (Sec.~4). We will also show the variation of the density with the system parameters in Sec.~4. \subsection{One soliton solutions} We now consider the case where $M=1$. This corresponds to one dual variable moving in the complex plane. For this case Eq.~\ref{fixsol} becomes, \begin{eqnarray} \frac{A}{L}\sinh\left(\frac{2x_{i}}{L}\right)=\frac{g}{L}\sum_{j\neq i}^{N}\coth\left(\frac{x_{i}-x_{j}}{L}\right)-\frac{g}{L}Re\left[\coth\left(\frac{x_{i}-z}{L}\right)\right] \label{solone} \end{eqnarray} The momenta are given by, \begin{eqnarray} p_{i}=\frac{g}{L}Im\left[\coth\left(\frac{x_{i}-z}{L}\right)\right]\textbf{\hspace{0.5 in}$i=1,\dots,N$} \end{eqnarray} After obtaining the initial conditions, we get the corresponding density profile. The density profile is plotted here by calculating the inverse of the inter-particle distance (see Fig.~\ref{fig:density101_L5}). \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{1solbkgb.png} \caption{One-soliton density profile for 300 particles. Since it is a one-soliton solution, only one dual variable was needed. The blue line represents our ansatz for the soliton (Eq.~\ref{anzsol}). We used $L=5$, $g=0.5$, $A=150$. The dual variable was at $z(t=0) = 0.078356i$ in the complex plane.} \label{fig:density101_L5} \end{figure} We can observe a bump at the origin. This is essentially the soliton. It forms due to the dual variable, which in principle acts like an attractor of particles, thus increasing the density near the origin. We have observed that the height of the bump depends on the distance of the dual variable from the real axis: the smaller the distance of the dual particle from the real axis, the greater the height of the resulting soliton. We have made an ansatz for the analytic form of the soliton, which we will discuss later in the field theory section (Sec.~4).\\ In numerical simulations, we have observed the evolution of the particles under the second-order differential equations, Eq.~\ref{xddot}. The particle trajectories (world lines) are plotted in Fig.~\ref{fig:101}. We also examined the evolution of the soliton density with time (see Fig.~\ref{fig:one}). \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{101b.png} \caption{World lines for $101$ particles (the 300-particle plot is not shown, for clarity). Here $g=0.5$, $A=50.5$. The dual variable was at $z(t=0) =0.078356i$ in the complex plane. Each line belongs to an individual particle; the trajectories of all 101 particles are plotted here. The wave-like curve is the result of coherent motion and corresponds to a single soliton. As can be seen, the particles always remain bounded within the box.} \label{fig:101} \end{figure} We observe a coherent motion of the particles as time evolves.
The result is a perfect wave-like motion, clearly visible even though each particle moves only slightly from its equilibrium position. This is the analogue of Newton's cradle \cite{kinoshita2006quantum}. The soliton maintains its form and does not break or disperse, which is exactly what we expect from a soliton evolution. Such robust evolution is highly non-trivial and involves a delicate balance between effects such as non-linearity, non-locality and dispersion. We also checked the integrability of the system by examining the integrals of motion: the $1^{st}$ integral (the energy of the system) and the $2^{nd}$ integral of motion were conserved with very high accuracy for very long times.\\ Soliton stability analysis is a subject of great interest, and we plan to address it for the HC model in the future. Our numerics indicate that upon slightly perturbing the soliton solution, the evolution remains robust for many time periods, i.e., for a considerably long time \footnote{We thank E. Bogomolny for pointing us to this interesting problem for the Hyperbolic Calogero model. }. \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{1solbkgf.png} \caption{Time evolution of the density profile. There is a single soliton which oscillates from one end of the box-like potential to the other. The soliton does not disintegrate as time evolves, i.e., it remains fully robust. Here A=150, N=300, g=0.5. The dual variable was at $z(t=0)= 0.078356i$.} \label{fig:one} \end{figure} \subsection{Multi-soliton evolutions (two, three and four soliton solutions)} It is important to note that we can find multi-soliton solutions in the confined HC model by exploiting the $M<N$ duality; the existence of multi-soliton solutions is a consequence of the classical integrability of the confined HC model. In this section we construct the multi-soliton solutions and examine the effects of multi-soliton collisions. For $M=2$, $M=3$ and $M=4$ there are interactions between the dual variables as well, so we expect interesting dynamics in the complex plane too. The motion of the dual variables is discussed in Sec.~\ref{ddc}. \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{3solgiff.png} \caption{Three-soliton density profiles at different times. The solitons have different heights and pass through each other. Note that the densities are constructed as the inverse of the inter-particle distance. Here M=3, L=5, N=300, g=0.5 and A=150. The three dual variables are at $z_1(t=0)= 1.75+0.118356i$, $z_2(t=0)= 0.098356i$ and $z_3(t=0)= -1.75+0.078356i$.} \label{fig:3diff} \end{figure} In Fig.~\ref{fig:3diff}, we show three-soliton solutions and their evolution. As can be seen, the solitons pass through each other, which is a consequence of their integrable nature. In Fig.~\ref{fig:density_dynamics}, we show the soliton train diagrams for two, three and four solitons (left panels of Fig.~\ref{fig:density_dynamics}). It is important to note that the real parts of the dual variables determine the positions of the solitons, the magnitudes of the imaginary parts determine the heights of the solitons (the greater the magnitude, the shorter the soliton), and the signs of the imaginary parts dictate the direction in which the solitons move (if the sign is negative, the soliton moves to the right; if the sign is positive, it moves to the left).
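The initial data behind these multi-soliton runs are generated exactly as before, with one $\coth$ term per dual variable in the relaxation gradient. Below is a hypothetical three-soliton setup reusing the sketch functions above; it also assumes that the single-$z$ momentum rule simply sums over the $z_n$, which is our illustrative guess rather than a statement of the paper's prescription.

```python
# Illustrative three-soliton initial data (M = 3); z-values as quoted in
# the three-soliton figure caption.
zs3 = np.array([1.75 + 0.118356j, 0.0 + 0.098356j, -1.75 + 0.078356j])
x3 = x_eq.copy()
for _ in range(100_000):
    x3 -= 1e-4 * grad_U_dual(x3, zs3, g, A, L)
# Momenta: assumed generalization of the one-soliton rule, summed over z_n.
p3 = (g / L) * np.imag(1.0 / np.tanh((x3[:, None] - zs3[None, :]) / L)).sum(axis=1)
traj3, _ = verlet(x3, p3, 1e-3, 20_000, g, A, L, N, M=3)
```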
The right panels of Fig.~\ref{fig:density_dynamics} show the motion of the guiding centres of the solitons. It can be seen that the centres of the solitons pass through each other and also bounce off the walls. \begin{figure}[H] \centering \includegraphics[width=0.92\linewidth]{solcolcompact2.png} \caption{Soliton train diagrams describing the evolution of two solitons (top left), three solitons (middle left) and four solitons (bottom left). Initially, for two solitons the dual variables were at $z_1(t=0) = 1.25+0.078356i$ and $z_2(t=0) = -1.25-0.138356i$; for three solitons the dual variables were at $z_1(t=0) =1.75+0.118356i$, $z_2(t=0) =0.098356i$ and $z_3(t=0) =-1.75-0.078356i$; and for four solitons the dual variables were at $z_1(t=0) =1.75+0.078356i$, $z_2(t=0) =0.75+0.108356i$, $z_3(t=0) = -0.75+0.138356i$ and $z_4(t=0) = -1.75-0.158356i$. The solitons pass through each other without getting destroyed. Adjacent to these, we plot the time evolution of the soliton guiding centres for two solitons (top right), three solitons (middle right) and four solitons (bottom right).} \label{fig:density_dynamics} \end{figure} \subsection{Quenching} In this section we examine the effects of suddenly changing a system parameter, such as the coupling constant $g$, focusing mainly on the resulting changes in the soliton evolution \cite{franchini2015universal,franchini2016hydrodynamics}.\\ \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{quenchfinaly3.png} \caption{Here N=300, A=150 and the dual variable was at $z(t=0)= 0.078356i$. (a) Before the quench the soliton oscillates normally. (b) When the parameter g is quenched from 0.5 to 0.8, the soliton breaks/splits. (c) Ripples move in opposite directions. (d) After a long time, the density profile is distorted by ripples.} \label{fig:testc1} \end{figure} In Fig.~\ref{fig:testc1}, we observe that the soliton breaks down and ripples are formed which bounce back and forth inside the box-like potential (see the caption of Fig.~\ref{fig:testc1}). In Fig.~\ref{fig:testc}, we see that when the coupling constant is decreased, the particles repel each other less strongly and can therefore come much closer to each other; when the coupling constant is increased (Fig.~\ref{fig:testc1}), the exact opposite phenomenon occurs. There is a discontinuity in the energy at the quench, which is expected since both the interaction and the external potential depend on $g$, but the new (post-quench) energy remains constant. \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{smallquenchfinaly3.png} \caption{Here N=300, A=150 and the dual variable was at $z(t=0)= 0.078356i$. (a) Before the quench the soliton oscillates normally. (b) When the parameter g is quenched from 0.5 to 0.25, the soliton breaks at that moment. (c) As time progresses, dips are formed. (d) After a long time, the density is distorted.} \label{fig:testc} \end{figure} \section{Dynamics of Dual particles in complex plane} \label{ddc} In this section, we focus on the motion of the dual variables corresponding to the one- and two-soliton solutions. We find the connection between the motion of the dual variables and that of the real Calogero particles, and we check energy conservation for the dual system. We also find an analytic solution for the motion of a single dual variable in the small-$y$ limit, together with the time period of the motion.
We also examine the effect of quenching on the dual variable. \subsection{Single Dual variable corresponding to one soliton solution} Once the initial conditions for the positions of the real variables $\{x_i (t=0)\}$ are obtained from the damping equation, Eq.~\ref{damp}, we can find the initial momenta of the dual variables using Eq.~\ref{zdot}. For a single $z$, Eq.~\ref{zdot} takes the form, \begin{eqnarray} \label{1z dot} \dot{z}-i\frac{A}{L}\sinh\left(\frac{2z}{L}\right)=-i\frac{g}{L}\sum_{j=1}^{N}\coth\left(\frac{z-x_{j}}{L}\right) \end{eqnarray} We choose the initial position of the dual variable on the imaginary axis, i.e. $Re[z]=0$; in the corresponding density plot, the initial position of the soliton is then centred at the origin. Once the initial position and momentum are determined, the evolution is governed by Eq.~\ref{zddot}, \begin{eqnarray} \label{1zddot} \label{single_z} \ddot{z}=-\frac{2A^{2}}{L^{3}}\sinh\left(\frac{2z}{L}\right)\cosh\left(\frac{2z}{L}\right)+\frac{2Ag}{L^{3}}N\sinh\left(\frac{2z}{L}\right) \end{eqnarray} We solve this differential equation numerically to obtain Fig.~\ref{fig:cplane_1sol_L5_t=3_g1_A31_n31b}: the trajectory forms a rectangle-like closed curve in the complex plane. \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{dual1.png} \caption{Single dual variable in the complex plane. The initial position is shown with the blue dot. One complete revolution of the dual variable corresponds to one complete oscillation of the soliton density profile. This figure describes the movement of the dual particle corresponding to the dynamics shown in the world-line plot, Fig.~\ref{fig:101}.} \label{fig:cplane_1sol_L5_t=3_g1_A31_n31b} \end{figure} We see that the motion of the dual variable is also confined. It forms a closed trajectory in the complex plane, shaped like a smeared rectangle; the motion is therefore periodic, and we expect it to have a definite time period. As the dual variable moves, it drags the soliton with it. This suggests that one complete cycle of the dual variable corresponds to one complete oscillation of the soliton, so we expect the two to have exactly the same time period; we discuss this in a later section. \subsection{Two Dual variables corresponding to two soliton solution} We repeat the same procedure for two dual variables instead of one. In this case, there is an interaction term in the governing equations.
Therefore, the analogous equations to Eq.~\ref{1z dot} and Eq.~\ref{1zddot} are, \begin{eqnarray} \dot{z_{1}}=i\frac{A}{L}\sinh\left(\frac{2z_{1}}{L}\right)+i\frac{g}{L}\coth\left(\frac{z_{1}-z_{2}}{L}\right)-i\frac{g}{L}\sum_{i=1}^{N}\coth\left(\frac{z_{1}-x_{i}}{L}\right)\nonumber\\ \dot{z_{2}}=i\frac{A}{L}\sinh\left(\frac{2z_{2}}{L}\right)+i\frac{g}{L}\coth\left(\frac{z_{2}-z_{1}}{L}\right)-i\frac{g}{L}\sum_{i=1}^{N}\coth\left(\frac{z_{2}-x_{i}}{L}\right) \end{eqnarray} and \\ \\ \begin{eqnarray} \ddot{z_{1}}=-\frac{2A^{2}}{L^{3}}\sinh\left(\frac{2z_{1}}{L}\right)\cosh\left(\frac{2z_{1}}{L}\right)+\frac{2Ag}{L^{3}}&&(N-1)\sinh\left(\frac{2z_{1}}{L}\right)\nonumber\\ &&+\frac{2g^{2}}{L^{3}}\left(\frac{\cosh\left(\frac{z_{1}-z_{2}}{L}\right)}{\sinh^{3}\left(\frac{z_{1}-z_{2}}{L}\right)}\right)\nonumber \end{eqnarray} \begin{eqnarray} \ddot{z_{2}}=-\frac{2A^{2}}{L^{3}}\sinh\left(\frac{2z_{2}}{L}\right)\cosh\left(\frac{2z_{2}}{L}\right)+\frac{2Ag}{L^{3}}&&(N-1)\sinh\left(\frac{2z_{2}}{L}\right)\nonumber\\ &&+\frac{2g^{2}}{L^{3}}\left(\frac{\cosh\left(\frac{z_{2}-z_{1}}{L}\right)}{\sinh^{3}\left(\frac{z_{2}-z_{1}}{L}\right)}\right) \end{eqnarray} \begin{figure}[H] \centering \includegraphics[width=0.75\linewidth]{dual2.png} \caption{Two dual variables with different $Im[z]$, corresponding to two solitons of different heights. This shows why the solitons pass through each other: the two dual variables interact very weakly and move essentially independently in their own orbits, so the corresponding solitons also move independently, i.e., they simply pass through one another. Here A=150, N=300, g=0.5. The plot extends over several time periods. The initial positions of the dual variables are shown with the blue and red dots.} \label{fig:diff_heightz} \end{figure} Fig.~\ref{fig:diff_heightz} shows the trajectories of the two dual variables. We see that they essentially do not change their trajectories, and this is reflected in the motion of the solitons: as discussed earlier, the solitons pass through each other unhindered. This is very hard to understand just by observing the real particles; the motion of the dual variables, on the other hand, gives a more transparent intuition into the interaction process, since, as stated earlier, the dual variables drag the solitons with them. \subsection{Effects of quenching on Dual variable} We have seen that due to quenching the soliton breaks, and the particles either spread out a little or contract inward. This is reflected in the motion of the dual variable. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{quench_z_101_g05_08d} \caption{The two panels show the effect of the quench on the dual variable. Although the dual variable maintains the same kind of trajectory, the mapping is no longer maintained, so the evolution is not a soliton evolution. Here A=150, N=300. The initial position is shown with the red dot.\\ (Left) When g is increased, the particles repel each other more and spread out; correspondingly, the dual variable moves in a larger orbit.\\ (Right) When g is decreased, the repulsion decreases and the particles shrink to a smaller region; correspondingly, the dual variable moves in a smaller orbit.} \label{fig:quench_z_101_g05_08d} \end{figure} We observe that after quenching, the dual variable shifts to either a larger or a smaller closed curve (depending on the nature of the quench).
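The dual-variable trajectories shown above are cheap to reproduce, since Eqs.~\ref{1z dot} and \ref{1zddot} close on $z$ alone. The following complex-valued leapfrog sketch (step sizes again illustrative, continuing the earlier code) traces the smeared-rectangle orbit; changing $g$ mid-run reproduces the quench behaviour just described.

```python
def z_accel(z, g, A, L, N):
    """Right-hand side of the single dual-variable equation (M = 1)."""
    return (-(2 * A**2 / L**3) * np.sinh(2 * z / L) * np.cosh(2 * z / L)
            + (2 * A * g / L**3) * N * np.sinh(2 * z / L))

z = 0.078356j                                 # start on the imaginary axis
zdot = (1j * (A / L) * np.sinh(2 * z / L)     # initial velocity from the
        - 1j * (g / L) * np.sum(1.0 / np.tanh((z - x1) / L)))  # first-order eq.
dt, orbit = 1e-4, []
for _ in range(200_000):                      # kick-drift-kick leapfrog
    zdot += 0.5 * dt * z_accel(z, g, A, L, N)
    z += dt * zdot
    zdot += 0.5 * dt * z_accel(z, g, A, L, N)
    orbit.append(z)
# The period read off from orbit can be checked against the closed-form
# expression Eq. (timeP) of the next subsection.
```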
If the coupling constant $g$ is increased the particles spread out, so the dual variable moves in a bigger orbit; if $g$ is decreased the particles contract, and $z$ moves in a smaller orbit.\\ We note that although the soliton of the real Calogero particles breaks into waves upon the quench, the dual variables still display an ordered motion. It is thus evident that the mapping breaks down in general, as the post-quench dynamics is no longer a soliton evolution; consequently, a single dual variable $z$ is not sufficient to describe the collective behaviour of the Calogero particles. \subsection{Analytic solution of Dual variable in small y limit} In this section, we solve the equation of motion for $z$ to find an analytic form of the solution, and from it a formula for the periodicity of the orbit, which we can then match against our simulation results. We need the small-$y$ approximation, which means that the imaginary part of $z$ is very small compared to the length of the box. This assumption is justified: for a proper soliton to form, the dual variable must lie quite close to the $x$-axis, which is the case for our model and system parameters. The equation of motion for a single dual variable is given by Eq.~\ref{single_z}.\\ Writing the real and imaginary parts of $\cosh(x+iy)$ and then taking the small-$y$ limit, i.e., $\cos(y)\rightarrow1$, we get for the real part of $z$, \begin{eqnarray} \ddot{x}=-\frac{A^{2}}{L^{3}}\sinh\left(\frac{4x}{L}\right)+\frac{2Ag}{L^{3}}N\sinh\left(\frac{2x}{L}\right) \end{eqnarray} The solution of the above equation is, \begin{eqnarray} x=L\left[\tan^{-1}\left(c_{1}\,\mathrm{JacobiSN}\left(ic_{2}t,m\right)\right)\right] \end{eqnarray} where $c_1$ and $c_2$ are determined by the initial conditions $x(0)$ and $\dot{x}(0)$. $\mathrm{JacobiSN}$ is one of the Jacobi elliptic functions and is periodic. The periodicity depends only on the parameter $m$, which in turn depends on the initial position, the initial velocity and the system parameters $g$ and $A$. The dependence of $m$ on these parameters is not explicit, but the period as a function of $m$ is, \begin{eqnarray} \label{timeP} T=\frac{\mathrm{Re}\left[4\,\mathrm{EllipticK}(1-m)\right]}{c_{2}} \end{eqnarray} where $\mathrm{EllipticK}(m)$ is the complete elliptic integral of the first kind, \begin{eqnarray} \mathrm{EllipticK}(m)=\int_{0}^{\frac{\pi}{2}}\left(\frac{1}{\sqrt{1-m\sin^{2}(\theta)}}\right)d\theta \end{eqnarray} This result matches extremely well with our simulation results. \section{Collective field theory formulation} \label{cft} In this section, we derive the collective field theory of the HC model. We take the continuum limit $N\rightarrow \infty$ and neglect the effects of the confining potential in the formulation of the Hamiltonian. The positions and momenta of the individual variables are replaced by a continuous density field $\rho(x)$ and a velocity field $v(x)$, and we aim to formulate the Hamiltonian as a functional of these fields.
Then we form the continuity and Euler equations, and finally we establish ansätze for the analytic forms of the background and soliton solutions. \subsection{Formulation of Hamiltonian} The general Hamiltonian without the external potential is of the form \cite{polychronakos1992new}, \begin{eqnarray} {\cal H}=\sum_{i=1}^{N}\left\{ \frac{p_{i}^{2}}{2}+\sum_{j\neq i}^{N}\frac{g^{2}}{2L^{2}}\left(\frac{1}{\sinh^{2}\left(\frac{x_{i}-x_{j}}{L}\right)}\right)\right\} \end{eqnarray} In the continuum limit, we replace the positions of the individual variables by a position function, assumed smooth, such that $x(j)=x_j$. Its derivative is related to the density field by, \begin{eqnarray} \label{xprimej} x'(j)=\frac{dx}{dj}=\frac{1}{\rho(x)} \end{eqnarray} We can show that, in the continuum limit (see Appendix B for details), \begin{eqnarray} \lim_{N\rightarrow\infty}\sum_{j\neq i}^{N}\frac{g^{2}}{2L^{2}}\left(\frac{1}{\sinh^{2}\left(\frac{x(i)-x(j)}{L}\right)}\right)=\frac{g^{2}}{2}\left(\pi\rho^{\mathrm{H}}-\partial_{x}\log\sqrt{\rho(x)}\right)^{2} \end{eqnarray} Then the Hamiltonian becomes, \begin{eqnarray} {\cal H}=\int dx\,\rho(x)\left[\frac{v^{2}}{2}+\frac{1}{2}\left(\pi g\rho^{\mathrm{H}}-g\partial_{x}\log\sqrt{\rho(x)}\right)^{2}\right]+const \end{eqnarray} where $\rho(x)^\mathrm{H}$ is the Hilbert transform of $\rho(x)$, defined as in \cite{stone,polymanas17}, \begin{eqnarray} \label{hilbert} \rho(x)^{\mathrm{H}}=\frac{1}{\pi L}P\left\{ \int_{-\infty}^{\infty}\left[\rho(\tau)\coth\left(\frac{\tau-x}{L}\right)d\tau\right]\right\} \end{eqnarray} A detailed discussion of the Hilbert transform is presented in Appendix C. \subsection{Analytic form of the background density} \subsubsection{Analytic form from background equations} In this section, we derive an analytic form of the density profile in the absence of the soliton. First, we form the field theory version of the background equation, Eq.~\ref{background}. So, we have, \begin{eqnarray} \label{startbackground} \frac{A}{L}\sinh\left(\frac{2x_{i}}{L}\right)=\frac{g}{L}\sum_{j\neq i}^{N}\coth\left(\frac{x_{i}-x_{j}}{L}\right) \end{eqnarray} Also, we know, \begin{eqnarray} \rho(x)=\sum_{i=1}^{N}\delta(x-x_{i}) \end{eqnarray} Multiplying both sides of Eq.~\ref{startbackground} by $\delta(x-x_{i})$ and summing over $i$, we have, \begin{eqnarray} \frac{A}{L}\sum_{i=1}^{N}\delta(x-x_{i})\sinh\left(\frac{2x_{i}}{L}\right)=\frac{g}{L}\sum_{i=1}^{N}\sum_{j\neq i}^{N}\delta(x-x_{i})\coth\left(\frac{x_{i}-x_{j}}{L}\right) \end{eqnarray} \begin{eqnarray} \frac{A}{L}\rho(x)\sinh\left(\frac{2x}{L}\right)=\frac{g}{L}\lim_{N\rightarrow\infty}\sum_{j\neq i}^{N}\rho(x)\coth\left(\frac{x-x_{j}}{L}\right) \end{eqnarray} Now from Appendix B (Part 1) we know, \begin{eqnarray} \frac{g}{L}\lim_{N\rightarrow\infty}\sum_{j\neq i}^{N}\rho(x)\coth\left(\frac{x-x_{j}}{L}\right)=g\rho(x)\left(\frac{\partial}{\partial x}\ln\sqrt{\rho(x)}-\pi\rho(x)^{\mathrm{H}}\right) \end{eqnarray} So we have, \begin{eqnarray} \frac{A}{L}\rho(x)\sinh\left(\frac{2x}{L}\right)=g\rho(x)\left(\frac{\partial}{\partial x}\ln\sqrt{\rho(x)}-\pi\rho(x)^{\mathrm{H}}\right) \end{eqnarray} which gives, \begin{eqnarray} \label{Fieldbackground} \frac{A}{L}\sinh\left(\frac{2x}{L}\right)+g\pi\rho(x)^{\mathrm{H}}=g\frac{\partial}{\partial x}\ln\sqrt{\rho(x)} \label{brho} \end{eqnarray} We shall argue later that in the large-$N$ limit, we can ignore the $\log$ term (i.e., the right-hand side of Eq.~\ref{brho}).
Keeping this in mind, we propose an ansatz for the functional form of the background density, \begin{eqnarray} \rho_{0}(x)&&=G\cosh\left(\frac{x}{L}\right)\sqrt{R^2-\sinh^{2}\left(\frac{x}{L}\right)}\hspace{0.5in}(|x|<L\sinh^{-1}R)\nonumber\\ &&=0\hspace{2.6in}\text{otherwise} \end{eqnarray} where the parameters $G$ and $R$ will be fixed later. We now show that this ansatz satisfies Eq.~\ref{Fieldbackground}. We have, \begin{eqnarray} \rho_{0}(x)^{\mathrm{H}}=P\left(\int_{-\infty}^{\infty}\frac{G}{\pi L}\cosh\left(\frac{\tau}{L}\right)\sqrt{R^{2}-\sinh^{2}\left(\frac{\tau}{L}\right)}\coth\left(\frac{\tau-x}{L}\right)d\tau\right) \end{eqnarray} We split the integral into two parts: one in which the integrand is regular, and another containing the singular part, which must be evaluated in the principal-value sense. Therefore we have, \begin{eqnarray} \label{bkndI1I2} &&\rho_{0}(x)^{\mathrm{H}}=\int_{-L_{1}}^{L_{1}}\frac{G}{\pi L}\cosh\left(\frac{\tau}{L}\right)\left[\sinh^{2}\left(\frac{x}{L}\right)-\sinh^{2}\left(\frac{\tau}{L}\right)\right]\frac{\coth\left(\frac{\tau-x}{L}\right)}{\sqrt{R^{2}-\sinh^{2}\left(\frac{\tau}{L}\right)}}d\tau\nonumber\\ &&+P\left(\int_{-L_{1}}^{L_{1}}\frac{G}{\pi L}\cosh\left(\frac{\tau}{L}\right)\left(R^{2}-\sinh^{2}\left(\frac{x}{L}\right)\right)\frac{\coth\left(\frac{\tau-x}{L}\right)}{\sqrt{R^{2}-\sinh^{2}\left(\frac{\tau}{L}\right)}}d\tau\right) \end{eqnarray} where $L_1=L\sinh^{-1}R$. So, \begin{eqnarray} I_1=\int_{-L_{1}}^{L_{1}}\frac{G}{2\pi L}\cosh\left(\frac{\tau}{L}\right)\Bigg[\cosh\left(\frac{2x}{L}\right)&&-\cosh\left(\frac{2\tau}{L}\right)\Bigg] \frac{\coth\left(\frac{\tau-x}{L}\right)}{\sqrt{R^{2}-\sinh^{2}\left(\frac{\tau}{L}\right)}}d\tau\nonumber\\ \end{eqnarray} \begin{eqnarray} =\int_{-L_{1}}^{L_{1}}-\frac{G}{2\pi L}\cosh\left(\frac{\tau}{L}\right)\frac{\sinh\left(\frac{2x}{L}\right)+\sinh\left(\frac{2\tau}{L}\right)}{\sqrt{R^{2}-\sinh^{2}\left(\frac{\tau}{L}\right)}}d\tau \end{eqnarray} \begin{eqnarray} =\int_{-L_{1}}^{L_{1}}-\frac{G}{2\pi L}\cosh\left(\frac{\tau}{L}\right)&&\frac{\sinh\left(\frac{2x}{L}\right)}{\sqrt{R^{2}-\sinh^{2}\left(\frac{\tau}{L}\right)}}d\tau\nonumber\\ &&-\int_{-L_{1}}^{L_{1}}\frac{G}{2\pi L}\cosh\left(\frac{\tau}{L}\right)\frac{\sinh\left(\frac{2\tau}{L}\right)}{\sqrt{R^{2}-\sinh^{2}\left(\frac{\tau}{L}\right)}}d\tau \end{eqnarray} We therefore again have two separate integrals. The integral, \begin{eqnarray} \int_{-L_{1}}^{L_{1}}\frac{G}{2\pi L}\cosh\left(\frac{\tau}{L}\right)\frac{\sinh\left(\frac{2\tau}{L}\right)}{\sqrt{R^{2}-\sinh^{2}\left(\frac{\tau}{L}\right)}}d\tau=0 \end{eqnarray} vanishes because the integrand is an odd function of $\tau$ and the integration limits are symmetric about the origin. So we need to evaluate the other integral (denoted $I_3$).
\begin{eqnarray} I_3=\int_{-L_{1}}^{L_{1}}-\frac{G}{2\pi L}\cosh\left(\frac{\tau}{L}\right)\frac{\sinh\left(\frac{2x}{L}\right)}{\sqrt{R^{2}-\sinh^{2}\left(\frac{\tau}{L}\right)}}d\tau \end{eqnarray} Changing variables via $\sinh(\frac{\tau}{L})=z$, so that $\cosh(\frac{\tau}{L})d\tau=L\,dz$, we have, \begin{eqnarray} I_3&=&\int_{-R}^{R}-\frac{G}{2\pi L}\frac{\sinh\left(\frac{2x}{L}\right)}{\sqrt{R^{2}-z^{2}}}L\,dz \nonumber\\ &=&-\frac{G}{2}\sinh\left(\frac{2x}{L}\right) \label{i3eq} \end{eqnarray} Now, we need to evaluate the other (principal-value) part of Eq.~\ref{bkndI1I2}, which is, \begin{eqnarray} I_2=\frac{G}{\pi L}\left[R^{2}-\sinh^{2}\left(\frac{x}{L}\right)\right]P\left(\int_{-L_{1}}^{L_{1}}\cosh\left(\frac{\tau}{L}\right)\frac{\coth\left(\frac{\tau-x}{L}\right)}{\sqrt{R^{2}-\sinh^{2}\left(\frac{\tau}{L}\right)}}d\tau\right) \end{eqnarray} We have performed this integration with a brute-force numerical technique and found it to be equal to $0$ to machine precision; hence we can safely say that this principal-value integral vanishes.\\ So, in order to satisfy Eq.~\ref{brho} (without the log term) we must have $G=\left(\frac{2A}{\pi Lg}\right)$ (so as to obey Eq.~\ref{i3eq}). Our proposed ansatz for the background density therefore reads, \begin{eqnarray} \rho_{0}(x)=\left(\frac{2A}{\pi Lg}\right)\cosh\left(\frac{x}{L}\right)\sqrt{R^{2}-\sinh^{2}\left(\frac{x}{L}\right)} \end{eqnarray} One way to justify ignoring the log term is as follows. If we take Eq.~\ref{brho} and rescale $\rho(x)= N \tilde \rho(x)$, we get, \begin{equation} \frac{A}{L}\sinh\left(\frac{2x}{L}\right)+g\pi N \tilde{\rho}(x)^{\mathrm{H}}=g\frac{\partial}{\partial x}\ln\sqrt{\tilde{\rho}(x)} \end{equation} Considering that $A\sim O(N)$, we notice that the log term is $1/N$-suppressed and can therefore be neglected in the large-$N$ limit. The irrelevance of the log term can also be seen in an alternative way in the next discussion (Sec.~\ref{aft}) on the field theory description of the Hamiltonian. It is interesting to note that trigonometric versions of Eq.~\ref{startbackground} (discrete) and of Eq.~\ref{Fieldbackground} (continuum, without the $\log$ term) appear in the context of the Gross-Witten-Wadia phase transition in large-$N$ gauge theories \cite{gross1993possible, wadia1980n}, which was analysed using the approach of Brezin, Itzykson, Parisi and Zuber \cite{brezin1978brezin}. To provide strong evidence for the above analytical result, we have matched our ansatz against numerical simulations of the density profile for various parameters (see Fig.~\ref{fig:tr}), and we find perfect agreement between the analytical expression and the brute-force numerics. Imposing the normalization condition $\int_{-L\sinh^{-1}(R)}^{L\sinh^{-1}(R)} \rho(x) dx = N$ gives $R=\sqrt{\frac{gN}{A}}$. So the final expression for the background density is, \begin{eqnarray} \label{finalexprho} \rho_{0}(x)=\left(\frac{2A}{\pi Lg}\right)\cosh\left(\frac{x}{L}\right)\sqrt{\left(\frac{gN}{A}\right)-\sinh^{2}\left(\frac{x}{L}\right)} \end{eqnarray} From the above equation we observe that, \begin{eqnarray} \rho_{0}(0)=\left(\frac{2}{\pi L}\sqrt{\frac{AN}{g}}\right) \end{eqnarray} \subsubsection{Analytical form from field theory of the Hamiltonian} \label{aft} The analytical solution for the background density can also be obtained without an ansatz, by writing down the large-$N$ field theory of the Hamiltonian Eq.~\ref{Hsquare} and then using the variational principle.
We can derive the field theory at large $N$ to be \cite{polymanas17}, \begin{eqnarray} {\cal H} =\int_{-\infty}^{+\infty}\Bigg[\frac{1}{2}\rho v^2+\frac{g^2 \pi^2 \rho^3}{6}&+&\frac{A^{2}}{2L^{2}}\sinh^{2}\left(\frac{2x}{L}\right)\rho(x)\nonumber \\ &-&\frac{AgN}{L^{2}}\cosh\left(\frac{2x}{L}\right)\rho(x)\Bigg]dx \label{HsquareF} \end{eqnarray} Taking the variational derivative with respect to $\rho$, i.e., $\frac{\delta {\cal H}}{\delta \rho}$, with a chemical potential $\mu$, gives, \begin{equation} \frac{g^2 \pi^2 \rho^2}{2}+\frac{A^{2}}{2L^{2}}\sinh^{2}\left(\frac{2x}{L}\right)-\frac{AgN}{L^{2}}\cosh\left(\frac{2x}{L}\right) = \mu \end{equation} which immediately gives, \begin{eqnarray} \label{rhoxf} \rho(x) = \frac{\sqrt{2}}{\pi g}\sqrt{\mu -\frac{A^{2}}{2L^{2}}\sinh^{2}\left(\frac{2x}{L}\right)+\frac{AgN}{L^{2}}\cosh\left(\frac{2x}{L}\right)} \end{eqnarray} Putting $\mu = \frac{g N A}{L^2}$, which sets the limits of integration to $\pm \beta$ with $\beta = L \sinh ^{-1} \Big[ \sqrt{\frac{g N}{A}}\Big]$, satisfies the normalization $\int_{-\beta}^{+\beta} \rho(x)dx = N$. Plugging this expression for $\mu$ back into Eq.~\ref{rhoxf} gives, after some algebra, exactly the expression Eq.~\ref{finalexprho}. We therefore arrive at the background density without the need for an ansatz. \subsubsection{Some observations on the background density} \label{transition} From Eq.~\ref{finalexprho}, we can analyse how the shape of the background density changes with the parameter values. The first derivative of that equation gives, \begin{eqnarray} \rho_{0}'(x)=\left(\frac{2A}{\pi Lg}\right) \frac{\left(gN-A\cosh\left(\frac{2x}{L}\right)\right)\sinh\left(\frac{x}{L}\right)}{AL\sqrt{\frac{gN}{A}-\sinh^{2}\left(\frac{x}{L}\right)}} \end{eqnarray} Therefore $\rho_{0}'(x)=0$ for $x=0$ and for $\cosh(\frac{2x}{L})=\frac{gN}{A}$. But the minimum value of $\cosh(\frac{2x}{L})$ is $1$, so for $\frac{gN}{A}<1$ there is only one extremum (at $x=0$), while for $\frac{gN}{A}>1$ we get two more extrema in addition to $x=0$, located at, \begin{eqnarray} x=\pm L\cosh^{-1}\sqrt{\frac{1}{2}\left(1+\frac{gN}{A}\right)} \label{xtwo} \end{eqnarray} Taking the second derivative, we find a single maximum at $x=0$ for $\frac{gN}{A}<1$; for $\frac{gN}{A}>1$, there is a minimum at $x=0$ and two maxima at $x=\pm L\cosh^{-1}\sqrt{\frac{1}{2}\left(1+\frac{gN}{A}\right)}$. In the case $\frac{gN}{A}=1$, Eq.~\ref{xtwo} gives $L\cosh^{-1}\sqrt{\frac{1}{2}\left(1+\frac{gN}{A}\right)}=0$: the three extrema coincide at $x=0$ and we have an inflection point. So, fixing $A$ and $N$, we can observe a transition in the functional form of the background density as we change the coupling constant $g$: at $\frac{gN}{A}=1$ there is a change in the curvature of the background density, which is also evident from Fig.~\ref{fig:tr}. This analysis is closely connected to the Gross-Witten-Wadia phase transition in large-$N$ gauge theories \cite{gross1993possible, wadia1980n}. \begin{figure}[H] \centering \includegraphics[width=0.32\linewidth]{denfuncg_01a.png} \includegraphics[width=0.32\linewidth]{denfuncg1a.png} \includegraphics[width=0.32\linewidth]{denfuncg50a.png} \caption{Background density in the three regimes with $N=300$, $A=300$, $L=5$. The solid lines represent the analytical result, Eq.~\ref{finalexprho}, and the red dots represent the brute-force numerical data using Eq.~\ref{damp}.
(Left) The case $g<1$ (g=0.1), with one maximum and a dome-like density profile. (Middle) The case $g=1$, with an inflection point (table-top structure). (Right) The case $g>1$ (g=50), with one minimum and two maxima; here the particles get pushed out towards the edges of the box. There is thus a transition in the density profile at $g=1$.} \label{fig:tr} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.32\linewidth]{denfuncsol_01b.png} \includegraphics[width=0.32\linewidth]{denfuncsol_1.png} \includegraphics[width=0.32\linewidth]{denfuncsol_50.png} \caption{Soliton profiles in the three regimes with $N=300$, $A=300$, $L=5$, from the brute-force numerical data using Eq.~\ref{solone}. (Left) The case $g<1$ (g=0.1) with $z_1=0.018356i$. (Middle) The case $g=1$ with $z_2=0.078356i$. (Right) The case $g>1$ (g=50) with $z_3=0.018356i$.} \label{fig:trsol} \end{figure} Fig.~\ref{fig:trsol} shows the one-soliton solution sitting on top of the background in all three regimes, $g<1$, $g=1$ and $g>1$. The corresponding time evolution is a solitonic excitation moving on top of these non-trivial backgrounds. \subsection{Analytic form of the density field and soliton solutions in Field Theory} As an equivalent form of those dual equations, we introduce two meromorphic fields $U^{+}(x)$ and $U^{-}(x)$ \cite{mainmanas}, \begin{eqnarray} \label{uplus} U^{+}(x)=\frac{ig}{L}\sum_{n=1}^{M}\coth\left(\frac{x-z_{n}}{L}\right) \end{eqnarray} \begin{eqnarray} \label{uminus} U^{-}(x)=-\frac{ig}{L}\sum_{j=1}^{N}\coth\left(\frac{x-x_{j}}{L}\right) \end{eqnarray} These expressions are not yet defined in the continuum limit. $U^{-}(x)$ has poles only on the real axis, while the poles of $U^{+}(x)$ lie off the real axis. We have, \begin{eqnarray} \label{uminusxj} U^{+}(x_{j})=p_{j}+\frac{ig}{L}\sum_{k\neq j}^{N}\coth\left(\frac{x_{j}-x_{k}}{L}\right) \end{eqnarray} We introduce the corresponding particle density field, \begin{eqnarray} \rho(x)=\sum_{j=1}^{N}\delta(x-x_{j}) \end{eqnarray} Now we can represent Eq.~\ref{uminus} in the following integral form, \begin{eqnarray} \label{cauchy} U^{-}(z)=-\frac{ig}{L}\int_{-\infty}^{\infty} \rho(x) \coth\Big(\frac{z-x}{L}\Big)dx \end{eqnarray} The above representation is independent of the exact number of particles, so it can be extended to the continuum limit, where of course the form of the density function will be different. Eq.~\ref{cauchy} is discontinuous across the real axis, \begin{eqnarray} \lim_{\epsilon\rightarrow0}U^{-}(x_{0}-i\epsilon)&=&-\frac{ig}{L}\lim_{\epsilon\rightarrow0}\Bigg[\int_{-\infty}^{x{}_{0}-\epsilon}\rho(x)\coth\left(\frac{(x_{0}-i\epsilon)-x}{L}\right)dx\nonumber\\ &&+\int_{x_{0}+\epsilon}^{\infty}\rho(x)\coth\left(\frac{(x_{0}-i\epsilon)-x}{L}\right)dx\nonumber\\ &&+\int_{x_{0}-\epsilon}^{x_{0}+\epsilon}\rho(x)\coth\left(\frac{(x_{0}-i\epsilon)-x}{L}\right)dx\Bigg] \end{eqnarray} From the definition of the principal value integral we have, \begin{eqnarray} \label{u(x-e)} \lim_{\epsilon\rightarrow0}U^{-}(x_{0}-i\epsilon)&=&-\frac{ig}{L}\lim_{\epsilon\rightarrow0}\Bigg[P\left\{ \int_{-\infty}^{\infty}\rho(x)\coth\left(\frac{z-x}{L}\right)dx\right\} \Bigg|_{z=(x_{0}-i\epsilon)}\nonumber\\ &&+\int_{x_{0}-\epsilon}^{x_{0}+\epsilon}\rho(x)\coth\left(\frac{(x_{0}-i\epsilon)-x}{L}\right)dx\Bigg] \end{eqnarray} We now simplify the second term on the right-hand side of the above equation.
\\ We observe that, \begin{eqnarray} \lim_{\epsilon\rightarrow0}\int_{x_{0}-\epsilon}^{x_{0}+\epsilon}\rho(x)&\coth&\left(\frac{x-(x_{0}-i\epsilon)}{L}\right)dx\nonumber\\ &&+\lim_{r\rightarrow0}\int_{c_{1}}\rho(z)\coth\left(\frac{z-(x_{0}-i\epsilon)}{L}\right)dz\nonumber\\ &&=\oint\rho(z)\coth\left(\frac{z-(x_{0}-i\epsilon)}{L}\right)dz \end{eqnarray} Here we choose a closed semicircular contour of radius $r$ closing in the upper half-plane, consisting of the curve $\mathcal{C}_1$ in Fig.~\ref{fig:contour_plot1} and the segment of the real axis joining its endpoints. We have shifted the singular point below the real axis, and the contour is traversed anticlockwise. Using the residue theorem, we have (see Appendix C), \begin{eqnarray} \label{resi-final} \oint\rho(z)\coth\left(\frac{z-(x_{0}-i\epsilon)}{L}\right)dz=0 \end{eqnarray} Therefore, \begin{eqnarray} \lim_{\epsilon\rightarrow0}\int_{x_{0}-\epsilon}^{x_{0}+\epsilon}\rho(x)\coth\left(\frac{(x_{0}-i\epsilon)-x}{L}\right)dx=i\pi L\rho(x_{0}) \end{eqnarray} So from Eq.~\ref{u(x-e)} we get the final expression, \begin{eqnarray} \lim_{\epsilon\rightarrow0}U^{-}(x_{0}-i\epsilon)=-\frac{ig}{L}\left[-\pi L\rho(x_{0})^{\mathrm{H}}+i\pi L\rho(x_{0})\right] \end{eqnarray} Thus, we have, \begin{eqnarray} U^{-}(x-i0)=\pi g\rho(x)+i \pi g\rho(x)^{\mathrm{H}} \end{eqnarray} Similarly, we can shift the singular point upward by $\epsilon$ and repeat the calculation, in which case the contour integral Eq.~\ref{resi-final} produces $2\pi iL\rho(x_{0})$. So, \begin{eqnarray} \lim_{\epsilon\rightarrow0} \int_{x_{0}-\epsilon}^{x_{0}+\epsilon}\rho(x)\coth\left(\frac{x-(x_{0}+i\epsilon)}{L}\right)dx+i\pi L\rho(x_{0})=2\pi iL\rho(x_{0}) \end{eqnarray} which implies \begin{eqnarray} \lim_{\epsilon\rightarrow0}\int_{x_{0}-\epsilon}^{x_{0}+\epsilon}\rho(x)\coth\left(\frac{(x_{0}+i\epsilon)-x}{L}\right)dx=-i\pi L\rho(x_{0}) \end{eqnarray} So, finally we get, \begin{eqnarray} U^{-}(x\mp i0)=\pm\pi g\rho(x)+i\pi g\rho(x)^{\mathrm{H}} \end{eqnarray} We now need the equivalent form of $U^{+}(x)$ in the continuum limit. From Eq.~\ref{uminusxj} we have, \begin{eqnarray} \sum_{j=1}^{N}\delta(x-x_{j})U^{+}(x_{j})&=&\sum_{j=1}^{N}\delta(x-x_{j})p_{j}\nonumber\\ &&+\frac{ig}{L}\sum_{j=1}^{N}\sum_{k\neq j}^{N}\delta(x-x_{j})\coth\left(\frac{x_{j}-x_{k}}{L}\right) \end{eqnarray} \begin{eqnarray} \label{rhouplus} \rho(x)U^{+}(x)=\rho(x)v(x)+i\frac{g}{L}\rho(x)\sum_{k\neq j}^{N}\coth\left(\frac{x-x_{k}}{L}\right) \end{eqnarray} where we define $v(x)$ such that, \begin{eqnarray} \rho(x)v(x)=\sum_{j=1}^{N}\dot{x}_{j}\delta(x-x_{j}) \end{eqnarray} and \begin{eqnarray} \sum_{j=1}^{N}\delta(x-x_{j})U^{+}(x_{j})=\rho(x)U^{+}(x) \end{eqnarray} Next, we go to the continuum limit, i.e., $N\rightarrow\infty$. As in the case of the Hamiltonian, the positions of the individual variables are replaced by an equivalent position function $x=x(j)$, where $j$ labels the $j^{th}$ particle; it is related to the density field by Eq.~\ref{xprimej}. Therefore Eq.~\ref{rhouplus} takes the form, \begin{eqnarray} \rho(x)U^{+}(x)=\rho(x)v(x)+\frac{ig}{L}\lim_{N\rightarrow\infty}\sum_{k\neq j\atop k=-N}^{N}\rho(x)\coth\left(\frac{x(j)-x(k)}{L}\right) \end{eqnarray} We can show that (see Appendix B), \begin{eqnarray} \label{HlimitU(x)} \lim_{N\rightarrow\infty}\sum_{k\neq j\atop k=-N}^{N}\coth\left(\frac{x(j)-x(k)}{L}\right)=L\frac{\partial}{\partial x}\ln\sqrt{\rho(x)}-\pi L\rho(x)^{\mathrm{H}} \end{eqnarray} where $\rho(x)^{\mathrm{H}}$ is defined in Eq.~\ref{hilbert}.
Therefore, we have, \begin{eqnarray} \rho(x)U^{+}(x)=\rho(x)v(x)+i\frac{g}{L}\rho(x)\left(L\frac{\partial}{\partial x}\ln\sqrt{\rho(x)}-\pi L\rho(x)^{\mathrm{H}}\right) \end{eqnarray} and dividing by $\rho(x)$ we get, \begin{eqnarray} \label{fieldUplus} U^{+}(x)=v(x)+ig\left(\frac{\partial}{\partial x}\ln\sqrt{\rho(x)}-\pi\rho(x)^{\mathrm{H}}\right) \end{eqnarray} As stated earlier, the number of dual variables is independent of the number of real particles, so one-soliton solutions can be found in the continuum limit as well. From Eq.~\ref{uplus} and Eq.~\ref{fieldUplus}, for $M=1$ we get, \begin{eqnarray} \label{(U+fieldeqn)} \frac{ig}{L}\coth\left(\frac{x-z}{L}\right)=-ig\left(\pi\rho^{H}(x)-\partial_{x}\log\sqrt{\rho(x)}\right)+v(x) \end{eqnarray} Equating the imaginary parts of this equation, we get, \begin{eqnarray} \label{ansatzcheck} g\left(\pi\rho^{H}-\partial_{x}\log\sqrt{\rho(x)}\right)=-\frac{g}{2L}\left[\coth\left(\frac{x-z}{L}\right)+\coth\left(\frac{x+z}{L}\right)\right] \end{eqnarray} This equation is difficult to solve explicitly, so we instead guess a solution and verify that it satisfies the equation. We put forward the following ansatz for the density field of the one-soliton solution in the continuum limit, \begin{eqnarray} \rho(x)=\rho_{0}(x)+\frac{1}{2i\pi L}\left[\coth\left(\frac{x-{i\lambda}}{L}\right)-\coth\left(\frac{x+{i\lambda}}{L}\right)\right] \label{anzsol} \end{eqnarray} where $\lambda$ is some function of the position of the dual variable, to be determined from Eq.~\ref{ansatzcheck} using this ansatz. Note that, as input, we specify the value of the dual variable $z(t=0)=a+i b$. Since this analysis is without an external trap, translational invariance allows us to set $a=0$ without loss of generality. Our aim is to evaluate the left-hand side of Eq.~\ref{ansatzcheck} using the ansatz for $\rho(x)$; the required Hilbert transform is worked out in detail in Appendix C.
It can be shown that, \begin{eqnarray} \label{hilbert2} \rho(x)^{\mathrm{H}}=-\frac{1}{2\pi L}\left[\coth\left(\frac{x-{i\lambda}}{L}\right)+\coth\left(\frac{x+{i\lambda}}{L}\right)\right] \end{eqnarray} The remaining piece of the left-hand side is $-\frac{g}{2}\frac{\rho'(x)}{\rho(x)}$. Plugging in the ansatz (Eq.~\ref{anzsol}), we get, \begin{eqnarray} \frac{1}{2}\frac{\rho'(x)}{\rho(x)}=\frac{1}{\rho(x)}\left(-\frac{1}{{4i}\pi L^{2}}\right)\left[\coth^{2}\left(\frac{x-i\lambda}{L}\right)-\coth^{2}\left(\frac{x+i\lambda}{L}\right)\right] \end{eqnarray} which implies, \begin{eqnarray} g&&\Bigg(\pi \rho(x)^{\mathrm{H}}-\frac{\partial}{\partial x}\ln\sqrt{\rho(x)}\Bigg)=\left(-\frac{g}{2L}\right)\Bigg[\coth\left(\frac{x-i\lambda}{L}\right)\nonumber\\ &&+\coth\left(\frac{x+i\lambda}{L}\right)\Bigg] \left\{ 1-\frac{\left(\frac{1}{\mathit{2i}\pi L}\right)\left[\coth\left(\frac{x-i\lambda}{L}\right)-\coth\left(\frac{x+i\lambda}{L}\right)\right]}{\rho_{0}+\frac{1}{\mathit{2i}\pi L}\left[\coth\left(\frac{x-i\lambda}{L}\right)-\coth\left(\frac{x+i\lambda}{L}\right)\right]}\right\} \end{eqnarray} This gives us, \begin{eqnarray} \label{64} g\Bigg(\pi\rho(x)^{\mathrm{H}}&&-\frac{\partial}{\partial x}\ln\sqrt{\rho(x)}\Bigg)=\nonumber\\ &&\left(-\frac{g}{2L}\right)\left\{ \frac{2\sinh\left(\frac{2x}{L}\right)}{\cosh\left(\frac{2x}{L}\right)-\left[\cos\left(\frac{2\lambda}{L}\right)-\left(\frac{1}{\pi L\rho_{0}}\right)\sin\left(\frac{2\lambda}{L}\right)\right]}\right\} \end{eqnarray} In order to satisfy Eq.~\ref{ansatzcheck}, we must have \begin{eqnarray} \label{lambda -b relation} \cos\left(\frac{2\lambda}{L}\right)-\left(\frac{1}{\pi L\rho_{0}}\right)\sin\left(\frac{2\lambda}{L}\right)=\cos\left(\frac{2b}{L}\right)=\cosh\left(\frac{2ib}{L}\right) \end{eqnarray} The above equation relates $\lambda$ and $b$. This transcendental equation can be solved numerically to obtain $\lambda$ for a given value of $b$. If $\left( \frac{2\lambda}{L}\right)$ and $\left(\frac{2b}{L}\right)$ are small, we can solve the equation approximately (by Taylor expansion of the trigonometric functions) to obtain, \begin{eqnarray} \label{small-lambda-b relation} \lambda=\frac{1}{2\pi\rho_{0}}\left[\left(1+(2\pi b\rho_{0})^{2}\right)^{\frac{1}{2}}-1\right]\quad \mbox{for } \lambda,b \ll L \end{eqnarray} So, from Eq.~\ref{64} we get, \begin{eqnarray} g\Bigg(\pi\rho(x)^{\mathrm{H}}-\frac{\partial}{\partial x}\ln\sqrt{\rho(x)}\Bigg)=\left(-\frac{g}{2L}\right)\left\{ \frac{\sinh\left(\frac{2x}{L}\right)}{\frac{1}{2}\left[\cosh\left(\frac{2x}{L}\right)-\cosh\left(\frac{2ib}{L}\right)\right]}\right\} \end{eqnarray} which implies, \begin{eqnarray} g\left(\pi\rho^{\mathrm{H}}-\partial_{x}\mathrm{ln}\sqrt{\rho(x)}\right)=-\frac{g}{2L}\left[\coth\left(\frac{x-ib}{L}\right)+\coth\left(\frac{x+ib}{L}\right)\right] \end{eqnarray} Therefore we have proved that our ansatz is correct. Here, $b$ is given, and $\lambda$ is computed from the transcendental relation Eq.~\ref{lambda -b relation}. \subsection{Velocity of the soliton} In earlier sections we argued that the dual variable drags the soliton with it, so the velocity of the soliton and that of the dual variable should be exactly the same. Earlier we provided an expression for the time period of the dual variable in the small-$y$ limit; in this section we present an expression for the velocity of the soliton~\cite{stone}, i.e., the speed (denoted $v_{soliton}$) at which the soliton travels.
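In practice, evaluating this speed requires $\lambda$ for the given $b$, i.e., a numerical root of the transcendental relation Eq.~\ref{lambda -b relation}. The following is a minimal, self-contained sketch: the root bracket and the sample values are our assumptions, and $\rho_0$ here stands for the background density at the soliton centre.

```python
import numpy as np
from scipy.optimize import brentq

def lam_of_b(b, rho_0, L):
    """Root of cos(2 lam/L) - sin(2 lam/L)/(pi L rho_0) = cos(2 b/L)."""
    f = lambda lam: (np.cos(2 * lam / L)
                     - np.sin(2 * lam / L) / (np.pi * L * rho_0)
                     - np.cos(2 * b / L))
    return brentq(f, 1e-12, np.pi * L / 4)   # bracket valid for small b

b, rho_0, L = 0.078356, 15.6, 5.0            # illustrative values
lam = lam_of_b(b, rho_0, L)
# Small-argument check against the approximate closed form:
lam_small = (np.sqrt(1 + (2 * np.pi * b * rho_0) ** 2) - 1) / (2 * np.pi * rho_0)
```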
From the continuity equation, one gets, \begin{eqnarray} \label{vsoliton} v_{soliton}=\frac{\rho}{\rho-\rho_0}v(x) \end{eqnarray} We get $v(x)$ from the real part of Eq.~(\ref{(U+fieldeqn)}). Therefore, \begin{eqnarray} v(x)=\frac{g}{2iL}\Bigg[\coth\left(\frac{x-ib}{L}\right)-\coth\left(\frac{x+ib}{L}\right)\Bigg] \end{eqnarray} Substituting this into Eq.~(\ref{vsoliton}) we get, \begin{eqnarray} v_{soliton}=\left(\frac{g}{2iL}\right)&&\left(\frac{\rho_{0}+\frac{1}{\mathit{2i}\pi L}\Bigg[\coth\left(\frac{x-i\lambda}{L}\right)-\coth\left(\frac{x+i\lambda}{L}\right)\Bigg]}{\frac{1}{\mathit{2i}\pi L}\Bigg[\coth\left(\frac{x-i\lambda}{L}\right)-\coth\left(\frac{x+i\lambda}{L}\right)\Bigg]}\right)\nonumber\\ &&\hspace{1in}\times\left[\coth\left(\frac{x-ib}{L}\right)-\coth\left(\frac{x+ib}{L}\right)\right] \end{eqnarray} \begin{eqnarray} =\pi g\rho_{0}\frac{\sinh\left(\frac{2ib}{L}\right)}{\sinh\left(\frac{2i\lambda}{L}\right)}\left(\frac{\cosh\left(\frac{2x}{L}\right)-\bigg[\cosh\left(\frac{2i\lambda}{L}\right)-\frac{1}{i\rho_{0}\pi L}\sinh\left(\frac{2i\lambda}{L}\right)\bigg]}{\cosh\left(\frac{2x}{L}\right)-\cosh\left(\frac{2ib}{L}\right)}\right) \end{eqnarray} Using Eq.~\ref{lambda -b relation} we get, \begin{eqnarray} v_{soliton}&& =\pi g\rho_{0}\frac{\sinh\left(\frac{2ib}{L}\right)}{\sinh\left(\frac{2i\lambda}{L}\right)}\left(\frac{\cosh\left(\frac{2x}{L}\right)-\cosh\left(\frac{2ib}{L}\right)}{\cosh\left(\frac{2x}{L}\right)-\cosh\left(\frac{2ib}{L}\right)}\right)\\ \nonumber\\ && =\pi g\rho_{0}\frac{\sin\left(\frac{2b}{L}\right)}{\sin\left(\frac{2\lambda}{L}\right)} \end{eqnarray} For small values of $\left(\frac{\lambda}{L}\right)$ and $\left(\frac{b}{L}\right)$, using Eq.~(\ref{small-lambda-b relation}) we get, \begin{eqnarray} v_{soliton}=\left(\frac{2gb\left(\pi^{2}\rho_{0}^{2}\right)}{\left[\left(1+\left(2\pi b\rho_{0}\right)^{2}\right)^{\frac{1}{2}}-1\right]}\right) \quad \mbox{for } \lambda,b \ll L \end{eqnarray} \section{Conclusions and Outlook} \label{sec:conclusion} To summarize: in this paper we introduced the general form of the HC model with an external confining box-like potential, for which the system remains integrable. We showed that the box-like potential does not allow the particles to spread out even as their number increases: the typical length of the system scales as $\sinh^{-1}(\sqrt{N})$, hence logarithmically, as opposed to $\sqrt{N}$ in the rational Calogero-Moser system. \\ \\ We formulated the effective dual system for this model, whose dual variables move in the complex plane. We showed that although the first-order equations are coupled, the second-order equations decouple completely, which finally gives us the equations of motion for the Calogero particles. We showed that the number of dual variables is independent of the number of Calogero particles, i.e., the number of dual particles can be less than the number of Calogero particles. By specifying the number of dual particles, we restrict the space of initial conditions, and this gives us the soliton solutions: $M$ dual variables give the $M$-soliton solution. \\ \\ We analysed the density profile corresponding to the background (i.e., when no dual variable is present) and analysed its dependence on the system parameters. We also found analytical expressions for the density profile and showed its similarity to a trigonometric version that appears in the context of the Gross-Witten-Wadia phase transition in large-$N$ gauge theories \cite{gross1993possible,wadia1980n}.
We then formulated the initial positions and corresponding momenta for the one-, two-, three- and four-soliton solutions using the damping equation. We analysed the dynamics of the particles and showed that the solitons formed do not break down as the system evolves; thus they are indeed soliton solutions of the HC model. We showed that the height of a soliton depends on the proximity of its dual variable to the real axis: the larger the imaginary part of the dual variable, the shorter the soliton. We also examined the effects of soliton collisions and of quenching, showing that quenching a parameter immediately breaks the soliton and generates ripples. \\ We explored the connection between the motion of the dual variables in the complex plane and the motion of the solitons. We showed that a dual variable essentially drags its soliton with it as it moves, and that the time period of one complete revolution of the dual variable equals that of one complete oscillation of the soliton. We solved the equation of motion of a single dual variable in the small-$y$ limit and found an analytic form of the time period which matches the simulations very well. \\ Finally, we discussed the continuum limit. We formulated the Hamiltonian as a functional of the density and velocity fields, formed an equivalent dual system using meromorphic fields, and derived their exact form in the continuum limit. Even in this limit, soliton solutions can be found: we formed the correct first-order equation for the one-soliton solution, made an ansatz for the analytic form of its density, and proved it correct, with comparisons against brute-force numerical simulations. \\ There are several directions for future work, and we state some of them here. Soliton stability analysis for these models is of interest and requires further investigation. Another interesting question is whether the solutions of the finite set of background equations for a finite number of particles correspond to zeros of known polynomials; this would be an interesting generalization of the Stieltjes problem \cite{shastry2001solution}. In the regime where the confined hyperbolic model can be written in the Bogomolny (positive-definite) form, the HC model is deeply connected to a generalization of the Log gas \cite{forrester2010log}, and such connections \cite{CalogeroMosermodel} need to be explored. One could also explore the possible relation of these hyperbolic models to random matrix theory \cite{bogomolny2009random}. \section{Acknowledgements} We would like to thank A. Polychronakos, A. Abanov, P. Wiegmann, E. Bettelheim, E. Bogomolny, S. Majumdar and D. Huse for useful discussions. M. K. gratefully acknowledges the Ramanujan Fellowship SB/S2/RJN-114/2016 from the Science and Engineering Research Board (SERB), Department of Science and Technology, Government of India. M. K. would like to acknowledge support from project 6004-1 of the Indo-French Centre for the Promotion of Advanced Research (IFCPAR). We would like to thank the ICTS program on Integrable systems in Mathematics, Condensed Matter and Statistical Physics (Code: ICTS/integrability2018/07) for enabling valuable discussions with many participants. M. K. thanks the hospitality of the Department of Physics, Princeton University, USA, where part of this work was done. M. K. also thanks the hospitality of the Laboratoire de Physique Th\'eorique, \'Ecole Normale Sup\'erieure, Paris, where part of this work was done.
\newpage \section{Appendix A: Equation of Motion from dual equations} In this appendix, we derive the equations of motion of the particles moving on the real axis, and of the dual variables moving in the complex plane, from the set of dual equations. We assume that there are $N$ real particles and $M(<N)$ dual variables. So, we start from the dual equations, \begin{eqnarray} \label{1stderi} \dot{x_{i}}-i\frac{A}{L}\sinh\left(\frac{2x_{i}}{L}\right)=-i\frac{g}{L}\sum_{j\neq i}^{N}&&\coth\left(\frac{x_{i}-x_{j}}{L}\right)\nonumber\\ &&+i\frac{g}{L}\sum_{n=1}^{M}\coth\left(\frac{x_{i}-z_{n}}{L}\right) \end{eqnarray} \begin{eqnarray} \dot{z_{n}}-i\frac{A}{L}\sinh\left(\frac{2z_{n}}{L}\right)=i\frac{g}{L}\sum_{m\neq n}^{M}&&\coth\left(\frac{z_{n}-z_{m}}{L}\right)\nonumber\\ &&-i\frac{g}{L}\sum_{i=1}^{N}\coth\left(\frac{z_{n}-x_{i}}{L}\right) \end{eqnarray} Differentiating Eq.~\ref{1stderi} once more with respect to time, we get, \begin{eqnarray} \ddot{x}_{i}=2i\frac{A}{L^{2}}\cosh\left(\frac{2x_{i}}{L}\right)\dot{x}_{i}&&+i\frac{g}{L^{2}}\sum_{j\neq i}^{N}\mathrm{csch}^{2}\left(\frac{x_{i}-x_{j}}{L}\right)\left(\dot{x}_{i}-\dot{x}_{j}\right)\nonumber\\ &&-i\frac{g}{L^{2}}\sum_{n=1}^{M}\mathrm{csch}^{2}\left(\frac{x_{i}-z_{n}}{L}\right)\left(\dot{x}_{i}-\dot{z}_{n}\right) \end{eqnarray} Substituting $\dot{x}_i$ and $\dot{z}_n$ from the dual equations, we get, \begin{eqnarray} \label{cumbersome-eom} \ddot{x}_{i}&&=2i\frac{A}{L^{2}}\cosh\left(\frac{2x_{i}}{L}\right)\Biggl\{i\frac{A}{L}\sinh\left(\frac{2x_{i}}{L}\right)-i\frac{g}{L}\sum_{j\neq i}^{N}\coth\left(\frac{x_{i}-x_{j}}{L}\right)\nonumber\\ &&+i\frac{g}{L}\sum_{n=1}^{M}\coth\left(\frac{x_{i}-z_{n}}{L}\right)\Biggr\}+i\frac{g}{L^{2}}\sum_{j\neq i}^{N}\mathrm{csch}^{2}\left(\frac{x_{i}-x_{j}}{L}\right)\nonumber\\ &&\Biggl\{ i\frac{A}{L}\left(\sinh\left(\frac{2x_{i}}{L}\right)-\sinh\left(\frac{2x_{j}}{L}\right)\right)-i\frac{g}{L}\Biggl[\sum_{a\neq i}^{N}\coth\left(\frac{x_{i}-x_{a}}{L}\right)\nonumber\\ &&-\sum_{b\neq j}^{N}\coth\left(\frac{x_{j}-x_{b}}{L}\right)\Biggr]+i\frac{g}{L}\sum_{n=1}^{M}\Biggl[\coth\left(\frac{x_{i}-z_{n}}{L}\right)\nonumber\\ &&-\coth\left(\frac{x_{j}-z_{n}}{L}\right)\Biggr]\Biggr\}-i\frac{g}{L^{2}}\sum_{n=1}^{M}\mathrm{csch}^{2}\left(\frac{x_{i}-z_{n}}{L}\right)\Biggl[i\frac{A}{L}\Bigg(\sinh\left(\frac{2x_{i}}{L}\right)\nonumber\\ &&-\sinh\left(\frac{2z_{n}}{L}\right)\Bigg)-i\frac{g}{L}\Biggl\{\sum_{j\neq i}^{N}\coth\left(\frac{x_{i}-x_{j}}{L}\right)+\sum_{m\neq n}^{M}\coth\left(\frac{z_{n}-z_{m}}{L}\right)\Biggr\}\nonumber\\ &&+i\frac{g}{L}\Biggl\{\sum_{m=1}^{M}\coth\left(\frac{x_{i}-z_{m}}{L}\right)-\sum_{j=1}^{N}\coth\left(\frac{z_{n}-x_{j}}{L}\right)\Biggr\}\Biggr] \end{eqnarray} We now break the above equation into parts and simplify each in turn. First we get, \begin{eqnarray} T_0=-\frac{2A^{2}}{L^{3}}\sinh\left(\frac{2x_{i}}{L}\right)\cosh\left(\frac{2x_{i}}{L}\right) \end{eqnarray} This term is already in simplified form and is the first term of the equation of motion (Eq.~\ref{cumbersome-eom}).
Next we simplify the following expression, \begin{eqnarray} T_1=2\cosh\left(\frac{2x_{i}}{L}\right)\sum_{j\neq i}^{N}\coth\left(\frac{x_{i}-x_{j}}{L}\right)&&-\sum_{j\neq i}^{N}\mathrm{csch}^{2}\left(\frac{x_{i}-x_{j}}{L}\right)\nonumber\\ &&\left(\sinh\left(\frac{2x_{i}}{L}\right)-\sinh\left(\frac{2x_{j}}{L}\right)\right)\nonumber \\ \end{eqnarray} \begin{eqnarray} =\sum_{j\neq i}^{N}\frac{1}{\sinh\left(\frac{x_{i}-x_{j}}{L}\right)}\Biggl[2\cosh\left(\frac{2x_{i}}{L}\right)\cosh&&\left(\frac{x_{i}-x_{j}}{L}\right)\nonumber\\ &&-2\cosh\left(\frac{x_{i}+x_{j}}{L}\right)\Biggr] \end{eqnarray} \begin{eqnarray} =\sum_{j\neq i}^{N}\frac{1}{\sinh\left(\frac{x_{i}-x_{j}}{L}\right)}\left[\cosh\left(\frac{3x_{i}-x_{j}}{L}\right)-\cosh\left(\frac{x_{i}+x_{j}}{L}\right)\right] \end{eqnarray} \begin{eqnarray} =2\sum_{j\neq i}^{N}\sinh\left(\frac{2x_{i}}{L}\right) \end{eqnarray} Therefore, we get, \begin{eqnarray} T_{1}=2\left(N-1\right)\sinh\left(\frac{2x_{i}}{L}\right) \end{eqnarray} Similarly we get a term, \begin{eqnarray} T_{2}=2\cosh\left(\frac{2x_{i}}{L}\right)\sum_{n=1}^{M}&&\coth\left(\frac{x_{i}-z_{n}}{L}\right)-\sum_{n=1}^{M}\mathrm{csch}^{2}\left(\frac{x_{i}-z_{n}}{L}\right)\nonumber\\ &&\left(\sinh\left(\frac{2x_{i}}{L}\right)-\sinh\left(\frac{2z_{n}}{L}\right)\right) \end{eqnarray} which yields, \begin{eqnarray} T_{2}=2\sum_{n=1}^{M}\sinh\left(\frac{2x_{i}}{L}\right)=2M\sinh\left(\frac{2x_{i}}{L}\right) \end{eqnarray} Thus $T_0$, $T_1$ and $T_2$ are the contributions involving the external potential. Next, we simplify the expressions which carry contributions from the interaction potential. The next term is, \begin{eqnarray} T_3=\sum_{j\neq i}^{N}\mathrm{csch}^{2}\left(\frac{x_{i}-x_{j}}{L}\right)\Biggl[\sum_{a\neq i}^{N}\coth&&\left(\frac{x_{i}-x_{a}}{L}\right)\nonumber\\ &&-\sum_{b\neq j}^{N}\coth\left(\frac{x_{j}-x_{b}}{L}\right)\Biggr] \end{eqnarray} \begin{eqnarray} =\sum_{j\neq i}^{N}2\frac{\cosh\left(\frac{x_{i}-x_{j}}{L}\right)}{\sinh^{3}\left(\frac{x_{i}-x_{j}}{L}\right)}&&+\sum_{j\neq i}^{N}\sum_{k\neq i,j}^{N}\mathrm{csch}^{2}\left(\frac{x_{i}-x_{j}}{L}\right)\nonumber\\ &&\left[\coth\left(\frac{x_{i}-x_{k}}{L}\right)-\coth\left(\frac{x_{j}-x_{k}}{L}\right)\right] \end{eqnarray} \begin{eqnarray} =\sum_{j\neq i}^{N}2\frac{\cosh\left(\frac{x_{i}-x_{j}}{L}\right)}{\sinh^{3}\left(\frac{x_{i}-x_{j}}{L}\right)}+\sum_{j\neq i}^{N}\sum_{k\neq i,j}^{N}\mathrm{csch}^{2}&&\left(\frac{x_{i}-x_{j}}{L}\right)\nonumber\\ &&\left[\frac{\sinh\left(\frac{x_{j}-x_{i}}{L}\right)}{\sinh\left(\frac{x_{i}-x_{k}}{L}\right)\sinh\left(\frac{x_{j}-x_{k}}{L}\right)}\right] \end{eqnarray} \begin{eqnarray} =\sum_{j\neq i}^{N}2&&\frac{\cosh\left(\frac{x_{i}-x_{j}}{L}\right)}{\sinh^{3}\left(\frac{x_{i}-x_{j}}{L}\right)}\nonumber\\ &&+\sum_{j\neq i}^{N}\sum_{k\neq i,j}^{N}\left[\frac{-1}{\sinh\left(\frac{x_{i}-x_{j}}{L}\right)\sinh\left(\frac{x_{i}-x_{k}}{L}\right)\sinh\left(\frac{x_{j}-x_{k}}{L}\right)}\right] \end{eqnarray} Now, \begin{eqnarray} \sum_{j\neq i}^{N}\sum_{k\neq i,j}^{N}\left(\frac{-1}{\sinh\left(\frac{x_{i}-x_{j}}{L}\right)\sinh\left(\frac{x_{i}-x_{k}}{L}\right)\sinh\left(\frac{x_{j}-x_{k}}{L}\right)}\right)=0 \end{eqnarray} because the individual terms in the summation cancel pairwise: the summand is antisymmetric under the exchange $j\leftrightarrow k$, since $\sinh$ is an odd function.
Hence, \begin{eqnarray} T_{3}&&=\sum_{j\neq i}^{N}\mathrm{csch}^{2}\left(\frac{x_{i}-x_{j}}{L}\right)\Biggl[\sum_{a\neq i}^{N}\coth\left(\frac{x_{i}-x_{a}}{L}\right)-\sum_{b\neq j}^{N}\coth\left(\frac{x_{j}-x_{b}}{L}\right)\Biggr]\nonumber\\ &&=\sum_{j\neq i}^{N}2\frac{\cosh\left(\frac{x_{i}-x_{j}}{L}\right)}{\sinh^{3}\left(\frac{x_{i}-x_{j}}{L}\right)} \end{eqnarray} The next term is, \begin{eqnarray} T_{4}&&=-\sum_{j\neq i}^{N}\mathrm{csch}^{2}\left(\frac{x_{i}-x_{j}}{L}\right)\sum_{n=1}^{M}\left[\coth\left(\frac{x_{i}-z_{n}}{L}\right)-\coth\left(\frac{x_{j}-z_{n}}{L}\right)\right]\nonumber\\ &&+\sum_{n=1}^{M}\mathrm{csch}^{2}\left(\frac{x_{i}-z_{n}}{L}\right)\sum_{j\neq i}^{N}\left[\coth\left(\frac{z_{n}-x_{j}}{L}\right)-\coth\left(\frac{x_{i}-x_{j}}{L}\right)\right] \end{eqnarray} \begin{eqnarray} =\sum_{j\neq i}^{N}\sum_{n=1}^{M}\Biggl[-\mathrm{csch}^{2}&&\left(\frac{x_{i}-x_{j}}{L}\right)\frac{\sinh\left(\frac{x_{j}-x_{i}}{L}\right)}{\sinh\left(\frac{x_{i}-z_{n}}{L}\right)\sinh\left(\frac{x_{j}-z_{n}}{L}\right)}\nonumber\\ &&+\mathrm{csch}^{2}\left(\frac{x_{i}-z_{n}}{L}\right)\frac{\sinh\left(\frac{x_{i}-z_{n}}{L}\right)}{\sinh\left(\frac{z_{n}-x_{j}}{L}\right)\sinh\left(\frac{x_{i}-x_{j}}{L}\right)}\Biggr] \end{eqnarray} \begin{eqnarray} =\sum_{j\neq i}^{N}\sum_{n=1}^{M}\Biggl[&&\frac{1}{\sinh\left(\frac{x_{i}-x_{j}}{L}\right)\sinh\left(\frac{x_{i}-z_{n}}{L}\right)\sinh\left(\frac{x_{j}-z_{n}}{L}\right)}\nonumber\\ &&-\frac{1}{\sinh\left(\frac{x_{i}-x_{j}}{L}\right)\sinh\left(\frac{x_{i}-z_{n}}{L}\right)\sinh\left(\frac{x_{j}-z_{n}}{L}\right)}\Biggr]=0 \end{eqnarray} Finally, we simplify the term, \begin{eqnarray} T_{5}=\sum_{n=1}^{M}\sum_{m\neq n}^{M}\Biggl[-\mathrm{csch}^{2}\left(\frac{x_{i}-z_{n}}{L}\right)&&\Biggl\{ \coth\left(\frac{z_{n}-z_{m}}{L}\right)\nonumber\\ &&-\coth\left(\frac{x_{i}-z_{m}}{L}\right)\Biggr\} \Biggr] \end{eqnarray} \begin{eqnarray} =\sum_{n=1}^{M}\sum_{m\neq n}^{M}\Biggl[-\mathrm{csch}^{2}\left(\frac{x_{i}-z_{n}}{L}\right)\left\{ \frac{\sinh\left(\frac{x_{i}-z_{n}}{L}\right)}{\sinh\left(\frac{z_{n}-z_{m}}{L}\right)\sinh\left(\frac{x_{i}-z_{m}}{L}\right)}\right\} \Biggr] \end{eqnarray} which gives us, \begin{eqnarray} T_{5}=\sum_{n=1}^{M}\sum_{m\neq n}^{M}\left[\frac{-1}{\sinh\left(\frac{x_{i}-z_{n}}{L}\right)\sinh\left(\frac{z_{n}-z_{m}}{L}\right)\sinh\left(\frac{x_{i}-z_{m}}{L}\right)}\right]=0 \end{eqnarray} We now have all the simplified pieces of Eq.~\ref{cumbersome-eom}. Summing $T_0$ to $T_5$ we get, \begin{eqnarray} \ddot{x_{i}}&&=-\frac{2A^{2}}{L^{3}}\sinh\left(\frac{2x_{i}}{L}\right)\cosh\left(\frac{2x_{i}}{L}\right)+\frac{2Ag}{L^{3}}(N-M-1)\sinh\left(\frac{2x_{i}}{L}\right)\nonumber\\ &&+\frac{2g^{2}}{L^{3}}\sum_{j\neq i}^{N}\left(\frac{\cosh\left(\frac{x_{i}-x_{j}}{L}\right)}{\sinh^{3}\left(\frac{x_{i}-x_{j}}{L}\right)}\right) \end{eqnarray} The same procedure yields the equations of motion for the dual variables. The contribution of the external potential changes, because the corresponding summation limits now run over the $M$ dual variables instead of the $N$ particles.
This entire procedure can be repeated to find the equations of motion for the dual variables. The contribution of the external potential changes because the corresponding summation limits are different: there are $M$ dual variables instead of $N$. Nevertheless the forms are quite similar, and after the algebra we get
\begin{eqnarray}
\ddot{z_{n}}&&=-\frac{2A^{2}}{L^{3}}\sinh\left(\frac{2z_{n}}{L}\right)\cosh\left(\frac{2z_{n}}{L}\right)+\frac{2Ag}{L^{3}}(N-M+1)\sinh\left(\frac{2z_{n}}{L}\right)\nonumber\\ &&+\frac{2g^{2}}{L^{3}}\sum_{m\neq n}^{M}\left(\frac{\cosh\left(\frac{z_{n}-z_{m}}{L}\right)}{\sinh^{3}\left(\frac{z_{n}-z_{m}}{L}\right)}\right)
\end{eqnarray}
\section{Appendix B: Field Theory}
\subsection{Part 1: Formation of $U^+(x)$ in the continuum limit.}
For a large number of particles we can define a smooth, continuous position function $x(s)$ such that $x(j)=x_j$. For $N\rightarrow\infty$ the position function becomes unique \cite{polymanas17} and is related to the density of the system as
\begin{eqnarray}
x'(j)=\frac{dx}{dj}=\frac{1}{\rho(x)}
\end{eqnarray}
We start from Eq.~\ref{HlimitU(x)},
\begin{eqnarray}
\label{hysum1}
\rho(x)U^{+}(x)=\rho(x)v(x)+\frac{ig}{L}\lim_{N\rightarrow\infty}\sum_{ k\neq j\atop k=-N}^{N}\rho(x)\coth\left(\frac{x(j)-x(k)}{L}\right)
\end{eqnarray}
The function $\coth \big( \frac{x-a }{L}\big)$ has a simple pole at $x=a$, as can be seen from its Laurent expansion:
\begin{eqnarray}
\coth\left(\frac{x-x_{k}}{L}\right)=\frac{1}{\left(\frac{x-x_{k}}{L}\right)}+\frac{\left(\frac{x-x_{k}}{L}\right)}{3}-\frac{\left(\frac{x-x_{k}}{L}\right)^{3}}{45}+\mathcal{O}\left[\left(\frac{x-x_{k}}{L}\right)^{5}\right]
\end{eqnarray}
Therefore, we can write the sum in Eq.~\ref{hysum1} as
\begin{eqnarray}
\label{hysum2}
\lim_{N\rightarrow\infty}\sum_{k\neq j\atop k=-N}^{N}\coth\left(\frac{x(j)-x(k)}{L}\right)=\sum_{k=-\infty}^{\infty}f(k)-\lim_{k\rightarrow j}f(k)
\end{eqnarray}
where $f(k)$ is
\begin{eqnarray}
\label{f(k)}
f(k)=\coth\left(\frac{x(j)-x(k)}{L}\right)+\frac{L}{x'(j)(k-j)}
\end{eqnarray}
It is important to note that the above treatment is only valid for $N\rightarrow\infty$ because only then, for any $j$, do we have
\begin{eqnarray}
\sum_{k=-\infty}^{\infty}\frac{L}{x'(j)(k-j)}=0
\end{eqnarray}
The counterterm in $f(k)$ is chosen so that the limit as $k\rightarrow j$ exists. The limiting value can be found in the following way:
\begin{eqnarray}
\lim_{k\rightarrow j}f(k)=\lim_{k\rightarrow j}\Biggl[\frac{1}{\left(\frac{x(j)-x(k)}{L}\right)}&+&\frac{\left(\frac{x(j)-x(k)}{L}\right)}{3}-\frac{\left(\frac{x(j)-x(k)}{L}\right)^{3}}{45}\nonumber\\ &+&\mathcal{O}\left[\left(\frac{x(j)-x(k)}{L}\right)^{5}\right]+\frac{L}{x'(j)(k-j)}\Biggr]
\end{eqnarray}
The only non-trivial terms remaining are
\begin{eqnarray}
\lim_{k\rightarrow j}f(k)=\lim_{k\rightarrow j}\left[\frac{L}{\left(x(j)-x(k)\right)}+\frac{L}{x'(j)(k-j)}\right]
\end{eqnarray}
From the Taylor expansion $x(k)=x(j)+x'(j)(k-j)+\frac{1}{2}x''(j)(k-j)^{2}+\cdots$ we have
\begin{eqnarray}
\frac{L}{x(j)-x(k)}=-\frac{L}{x'(j)(k-j)}+L\frac{x''(j)}{2\left[x'(j)\right]^{2}}+\mathcal{O}\left[k-j\right]
\end{eqnarray}
so that
\begin{eqnarray}
\lim_{k\rightarrow j}f(k)=L\frac{x''(j)}{2\left[x'(j)\right]^{2}}
\end{eqnarray}
Therefore from Eq.~\ref{hysum2}
\begin{eqnarray}
\lim_{N\rightarrow\infty}\sum_{k\neq j\atop k=-N}^{N}\coth\left(\frac{x(j)-x(k)}{L}\right)=\int_{-\infty}^{\infty}f(\nu)d\nu-L\frac{x''(j)}{2\left[x'(j)\right]^{2}}
\end{eqnarray}
Now we solve the integral in the above equation:
\begin{eqnarray}
\int_{-\infty}^{\infty}f(\nu)d\nu=\int_{-\infty}^{j-\epsilon}f(\nu)d\nu+\int_{j+\epsilon}^{\infty}f(\nu)d\nu+\lim_{\epsilon\rightarrow0}\int_{j-\epsilon}^{j+\epsilon}f(j)d\nu
\end{eqnarray}
As the limiting value of $f(k)$ as $k\rightarrow j$ is finite, we have
\begin{eqnarray}
\lim_{\epsilon\rightarrow0}\int_{j-\epsilon}^{j+\epsilon}f(j)d\nu=0
\end{eqnarray}
Therefore we can convert the above integral into a principal value integral.
By doing so, we can replace $f(k)$ by $\coth\left(\frac{x(j)-x(\nu)}{L}\right)$, as the counterterm contributes $0$ to the principal value integral:
\begin{eqnarray}
\int_{-\infty}^{j-\epsilon}f(\nu)d\nu+\int_{j+\epsilon}^{\infty}f(\nu)d\nu&&=P\left\{ \int_{-\infty}^{\infty}f(\nu)d\nu\right\}\nonumber\\ && =P\left\{ \int_{-\infty}^{\infty}\coth\left(\frac{x(j)-x(\nu)}{L}\right)d\nu\right\}
\end{eqnarray}
Let $x(\nu)=\tau$, so that $d\nu=\rho(\tau)d\tau$. Hence
\begin{eqnarray}
P\left\{ \int_{-\infty}^{\infty}\coth\left(\frac{x(j)-x(\nu)}{L}\right)d\nu\right\} &&=P\left\{ \int_{-\infty}^{\infty}\coth\left(\frac{x-\tau}{L}\right)\rho(\tau)d\tau\right\}\nonumber\\ &&=-\pi L\rho(x)^{\mathrm{H}}
\end{eqnarray}
The next task is to find the limiting value in terms of the density field. We know $x'(j)=\frac{1}{\rho(x)}$. Therefore
\begin{eqnarray}
x''(j)=-\frac{\rho'(x)}{\left[\rho(x)\right]^{3}}
\end{eqnarray}
This implies
\begin{eqnarray}
L\frac{x''(j)}{2\left[x'(j)\right]^{2}}=-L\frac{\partial}{\partial x}\ln\sqrt{\rho(x)}
\end{eqnarray}
Therefore we finally get
\begin{eqnarray}
\lim_{N\rightarrow\infty}\sum_{k\neq j\atop k=-N}^{N}\coth\left(\frac{x(j)-x(k)}{L}\right)=L\frac{\partial}{\partial x}\ln\sqrt{\rho(x)}-\pi L\rho(x)^{\mathrm{H}}
\end{eqnarray}
Hence from Eq.~\ref{hysum1} we get
\begin{eqnarray}
\rho(x)U^{+}(x)=\rho(x)v(x)+i\frac{g}{L}\rho(x)\left(L\frac{\partial}{\partial x}\ln\sqrt{\rho(x)}-\pi L\rho(x)^{\mathrm{H}}\right)
\end{eqnarray}
\subsection{Part 2: Formulation of the Hamiltonian}
We begin with the interaction potential in the Hamiltonian for a finite number of particles. From it we will find the equivalent form in the continuum limit using density fields. The interaction potential is of the form
\begin{eqnarray}
V_{int}=\sum_{j=1}^{N}\sum_{k\neq j\atop k=1}^{N}\frac{g^{2}}{2L^{2}}\left(\frac{1}{\sinh^{2}\left(\frac{x_j-x_k}{L}\right)}\right)
\end{eqnarray}
In the continuum limit the positions get replaced by position functions as before, and the summation ranges from $-\infty$ to $\infty$ as $N\rightarrow\infty$. Hence $V_{int}$ becomes
\begin{eqnarray}
\label{Actualvint}
V_{int}=\lim_{N\rightarrow\infty}\sum_{j=-N}^{N}\sum_{k\neq j\atop k=-N}^{N}\frac{g^{2}}{2L^{2}}\left(\frac{1}{\sinh^{2}\left(\frac{x(j)-x(k)}{L}\right)}\right)
\end{eqnarray}
We first evaluate the inner summation
\begin{eqnarray}
\lim_{N\rightarrow\infty}\sum_{k\neq j\atop k=-N}^{N}\frac{g^{2}}{2L^{2}}\left(\frac{1}{\sinh^{2}\left(\frac{x(j)-x(k)}{L}\right)}\right)\nonumber
\end{eqnarray}
From the Laurent expansion of $\frac{1}{\sinh^{2}\left(\frac{x(j)-x(k)}{L}\right)}$ we can see that it has a pole of second order:
\begin{eqnarray}
\frac{1}{\sinh^{2}\left(\frac{x(j)-x(k)}{L}\right)}=\frac{1}{\left(\frac{x(j)-x(k)}{L}\right)^{2}}&&-\frac{1}{3}+\frac{\left(\frac{x(j)-x(k)}{L}\right)^{2}}{15}\nonumber\\ &&+\mathcal{O}\left[\left(\frac{x(j)-x(k)}{L}\right)^{4}\right]
\end{eqnarray}
As before, we define a different function $g(k)$ which can be related to the required sum; $g(k)$ is not singular and hence is easier to work with. We define it as
\begin{eqnarray}
g(k)=\frac{1}{\sinh^{2}\left(\frac{x(j)-x(k)}{L}\right)}-\frac{L^{2}}{\left[x'(j)\right]^{2}(k-j)^{2}}+\frac{x''(j)}{\left[x'(j)\right]^{3}}\frac{L^{2}}{(k-j)}
\end{eqnarray}
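As a quick symbolic cross-check (our addition, using SymPy; it is not part of the derivation), both Laurent expansions quoted in this appendix can be reproduced in two lines:
\begin{verbatim}
import sympy as sp

u = sp.symbols('u')
print(sp.series(sp.coth(u), u, 0, 5))       # 1/u + u/3 - u**3/45 + O(u**5)
print(sp.series(1/sp.sinh(u)**2, u, 0, 4))  # u**(-2) - 1/3 + u**2/15 + O(u**4)
\end{verbatim}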
So we can write
\begin{eqnarray}
\label{vint}
\lim_{N\rightarrow\infty}\sum_{k\neq j\atop k=-N}^{N}\left(\frac{1}{\sinh^{2}\left(\frac{x(j)-x(k)}{L}\right)}\right)=&&\sum_{k=-\infty}^{\infty}g(k)\nonumber\\ &&-\lim_{k\rightarrow j}g(k)+\frac{\pi^{2}}{3}\frac{L^{2}}{\left[x'(j)\right]^{2}}
\end{eqnarray}
It is important to note the origin of the last term of the above equation. Unlike Eq.~\ref{f(k)}, the counterterms in $g(k)$ contain an even term as well. The odd term still contributes $0$ to the summation, but the even term does not: summed over $k\neq j$ it gives
\begin{eqnarray}
\sum_{k\neq j}\frac{L^{2}}{\left[x'(j)\right]^{2}(k-j)^{2}}=\frac{L^{2}}{\left[x'(j)\right]^{2}}\sum_{k\neq j}\frac{1}{(k-j)^2}
\end{eqnarray}
Substituting $(k-j)=s$, with the summation running over all non-zero integers, we can write this as \cite{stone}
\begin{eqnarray}
\frac{L^{2}}{\left[x'(j)\right]^{2}}\sum_{k\neq j}\frac{1}{(k-j)^2}=\frac{L^{2}}{\left[x'(j)\right]^{2}}\sum_{s\neq 0}\frac{1}{s^2}&& =\frac{\pi^{2}}{3}\frac{L^{2}}{\left[x'(j)\right]^{2}}
\end{eqnarray}
This is why the last term had to be added in Eq.~\ref{vint}: it compensates for replacing the summand by $g(k)$. We now find the limiting value of $g(k)$ as $k\rightarrow j$, in the same way as in Part 1 above:
\begin{eqnarray}
\lim_{k\rightarrow j}&&g(k)=\lim_{k\rightarrow j}\Biggl[\frac{1}{\left(\frac{x(j)-x(k)}{L}\right)^{2}}-\frac{1}{3}+\frac{\left(\frac{x(j)-x(k)}{L}\right)^{2}}{15}\nonumber\\ &&+\mathcal{O}\left[\left(\frac{x(j)-x(k)}{L}\right)^{4}\right]-\frac{L^{2}}{\left[x'(j)\right]^{2}(k-j)^{2}}+\frac{x''(j)}{\left[x'(j)\right]^{3}}\frac{L^{2}}{(k-j)}\Biggr]
\end{eqnarray}
The non-trivial terms left are
\begin{eqnarray}
\lim_{k\rightarrow j}g(k)=\lim_{k\rightarrow j}\Biggl[\frac{1}{\left(\frac{x(j)-x(k)}{L}\right)^{2}}&&-\frac{L^{2}}{\left[x'(j)\right]^{2}(k-j)^{2}}\nonumber\\ &&+\frac{x''(j)}{\left[x'(j)\right]^{3}}\frac{L^{2}}{(k-j)}\Biggr]
\end{eqnarray}
(the constant $-\frac{1}{3}$ from the expansion merely shifts $V_{int}$ by an overall constant and is dropped). After some algebra we get
\begin{eqnarray}
\label{limitgx}
\lim_{k\rightarrow j}g(k)&&=L^{2}\left[\frac{3}{4}\frac{\left[x''(j)\right]^{2}}{\left[x'(j)\right]^{4}}-\frac{1}{3}\frac{x'''(j)}{\left[x'(j)\right]^{3}}\right]\nonumber\\ &&=L^{2}\left[-\frac{1}{4}\frac{\left[x''(j)\right]^{2}}{\left[x'(j)\right]^{4}}-\frac{1}{3}\frac{d}{ds}\left(\frac{x''(s)}{\left[x'(s)\right]^{3}}\right)\Bigg|_{s=j}\right]
\end{eqnarray}
Now we need to compute $\sum_{k=-\infty}^{\infty}g(k)$. We follow the same convention as in the calculation of $\sum_{k=-\infty}^{\infty}f(k)$ in Part 1: we may drop the part of $g(k)$ which keeps it finite as $k\rightarrow j$, provided we take the principal value of the resulting integral. Pulling an $x$-derivative out of the principal value integral (the identity used here is recorded below),
\begin{eqnarray}
\sum_{k=-\infty}^{\infty}g(k)&&=-L\frac{\partial}{\partial x}P\left\{ \int_{-\infty}^{\infty}\coth\left(\frac{x(j)-\tau}{L}\right)\rho(\tau)d\tau\right\}
\end{eqnarray}
Therefore we have
\begin{eqnarray}
\label{integralgx}
\sum_{k=-\infty}^{\infty}g(k)=\pi L^{2}\frac{\partial}{\partial x}\left[\rho(x(j))^{\mathrm{H}}\right]
\end{eqnarray}
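The derivative identity invoked in the last step is elementary; we record it here for completeness:
\begin{eqnarray}
\frac{\partial}{\partial x}\coth\left(\frac{x-\tau}{L}\right)=-\frac{1}{L}\,\mathrm{csch}^{2}\left(\frac{x-\tau}{L}\right)
\quad\Longrightarrow\quad
\mathrm{csch}^{2}\left(\frac{x-\tau}{L}\right)=-L\,\frac{\partial}{\partial x}\coth\left(\frac{x-\tau}{L}\right)
\end{eqnarray}
so the principal value integral of $\mathrm{csch}^{2}$ follows by differentiating the integral already evaluated in Part 1, $P\left\{\int_{-\infty}^{\infty}\coth\left(\frac{x-\tau}{L}\right)\rho(\tau)d\tau\right\}=-\pi L\rho(x)^{\mathrm{H}}$, which yields Eq.~\ref{integralgx}.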
After finding this we go back to Eq.~\ref{Actualvint}.
\begin{eqnarray}
\lim_{N\rightarrow\infty}\sum_{j=-N}^{N}\sum_{k\neq j\atop k=-N}^{N}&&\left(\frac{1}{\sinh^{2}\left(\frac{x(j)-x(k)}{L}\right)}\right)\nonumber\\ &&=\int_{-\infty}^{\infty}dj\sum_{k\neq j}\left(\frac{1}{\sinh^{2}\left(\frac{x(j)-x(k)}{L}\right)}\right)
\end{eqnarray}
Using Eq.~\ref{vint}, Eq.~\ref{integralgx} and Eq.~\ref{limitgx} we have
\begin{eqnarray}
I&&=\lim_{N\rightarrow\infty}\sum_{j=-N}^{N}\sum_{k\neq j\atop k=-N}^{N}\left(\frac{1}{\sinh^{2}\left(\frac{x(j)-x(k)}{L}\right)}\right)\nonumber\\ &&=\int_{-\infty}^{\infty}\Bigg[\pi L^{2}\frac{\partial}{\partial x}\left[\rho\bigg(x(j)\bigg){}^{\mathrm{H}}\right] -L^{2}\Bigg\{-\frac{1}{4}\frac{\left[\rho'(x(j))\right]^{2}}{\left[\rho(x(j))\right]^{2}}\nonumber\\ &&\hspace{1.5in}-\frac{1}{3}\frac{d}{ds}\left(\frac{x''(s)}{\left[x'(s)\right]^{3}}\right)\Bigg|_{s=j}\Bigg\} +L^{2}\frac{\pi^{2}}{3}\rho^{2}\bigg(x(j)\bigg)\Bigg]dj
\end{eqnarray}
The total-derivative term integrates to zero for boundary conditions in which the density and its derivatives vanish at infinity. Now, from $x'(j)=\frac{1}{\rho(x)}$ we have $dj=\rho(x)dx$. Therefore
\begin{eqnarray}
I=\int_{-\infty}^{\infty}\Biggl[\pi L^{2}\frac{\partial}{\partial x}\left[\rho(x){}^{\mathrm{H}}\right]+L^{2}\frac{1}{4}\frac{\left[\rho'(x)\right]^{2}}{\left[\rho(x)\right]^{2}}+L^{2}\frac{\pi^{2}}{3}\left[\rho(x)\right]^{2}\Biggr]\rho(x)dx
\end{eqnarray}
and hence
\begin{eqnarray}
V_{int}=\frac{g^{2}}{2L^{2}}\,I=\int_{-\infty}^{\infty}\Biggl[\pi\frac{g^{2}}{2}\rho(x)\frac{\partial}{\partial x}\left[\rho(x){}^{\mathrm{H}}\right]+\frac{g^{2}}{8}\frac{\left[\rho'(x)\right]^{2}}{\rho(x)}+\frac{\pi^{2}g^{2}}{6}\left[\rho(x)\right]^{3}\Biggr]dx
\end{eqnarray}
Therefore the Hamiltonian in the continuum limit becomes
\begin{eqnarray}
{\cal H}=\int_{-\infty}^{\infty}dx\left[\frac{1}{2}\rho v^{2}+\frac{\pi g^{2}}{2}\rho\frac{\partial}{\partial x}\rho{}^{\mathrm{H}}+\frac{g^{2}}{8}\frac{\left[\rho'\right]^{2}}{\rho}+\frac{\pi^{2}g^{2}}{6}\rho^{3}\right]
\end{eqnarray}
The $\frac{1}{2}\rho v^{2}$ term is the kinetic-energy density, which carries over unchanged from the discrete Hamiltonian. The above equation can equivalently be written in the form \cite{abanov09}
\begin{eqnarray}
{\cal H}=\int dx\,\rho(x)\left[\frac{v^{2}}{2}+\frac{1}{2}\left(\pi g\rho^{\mathrm{H}}-g\partial_{x}\ln\sqrt{\rho(x)}\right)^{2}\right]+\mathrm{const}
\end{eqnarray}
\section{Appendix C: Hilbert transform}
We first explain the Hilbert transform in general, using a kernel $K\left(\frac{\tau-x}{L}\right)$ which is singular along the real axis. We use a general function $f(z)$ whose singular points are not on the real axis and whose poles are simple. We define the Hilbert transform of $f(z)$ as
\begin{eqnarray}
f(x)^{\mathrm{H}}=\frac{1}{\pi L}P\left\{ \int_{-\infty}^{\infty}f(\tau)K\left(\frac{\tau-x}{L}\right)d\tau\right\}
\end{eqnarray}
where $P$ stands for the principal value of the integral. Since the integrand has poles along the real line, the integral is valid only in the principal value sense. The principal value is defined as
\begin{eqnarray}
P\left\{ \int_{-\infty}^{\infty}f(\tau)K\left(\frac{\tau-x}{L}\right)d\tau\right\} =\lim_{\epsilon\rightarrow0}&&\Biggl[\int_{-\infty}^{x-\epsilon}f(\tau)K\left(\frac{\tau-x}{L}\right)d\tau\nonumber\\ &&+\int_{x+\epsilon}^{\infty}f(\tau)K\left(\frac{\tau-x}{L}\right)d\tau\Biggr]
\end{eqnarray}
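A remark we add for orientation (it is not in the original text): in the limit $L\rightarrow\infty$ the hyperbolic kernel used below reduces to the standard Hilbert-transform kernel, since $\coth(u)\rightarrow 1/u$ for small argument,
\begin{eqnarray}
\lim_{L\rightarrow\infty}\frac{1}{\pi L}\coth\left(\frac{\tau-x}{L}\right)=\frac{1}{\pi\left(\tau-x\right)}
\end{eqnarray}
so $f^{\mathrm{H}}$ can be viewed as a finite-$L$ deformation of the usual Hilbert transform.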
Now let $g(\tau)=f(\tau)K\left(\frac{\tau-x}{L}\right)$. Therefore
\begin{eqnarray}
\int_{-\infty}^{\infty}g(\tau)d\tau=P\left\{ \int_{-\infty}^{\infty}g(\tau)d\tau\right\} +\lim_{r\rightarrow0}\int_{c_{1}}g(\tau)d\tau
\end{eqnarray}
where $r$ is the radius of the contour $c_1$. Since $g(\tau)$ has a simple pole at $\tau=x$, the Laurent series expansion of $g(\tau)$ is
\begin{eqnarray}
g(\tau)=\frac{a_{-1}}{(\tau-x)}+a_{0}+a_{1}(\tau-x)+\cdots
\end{eqnarray}
We first solve the integral along the $c_1$ contour. Let $(\tau-x)=re^{i\theta}$, so that $d\tau=ire^{i\theta}d\theta$:
\begin{eqnarray}
\lim_{r\rightarrow0}\int_{c_{1}}g(\tau)d\tau=\lim_{r\rightarrow0}\int_{\pi}^{0}d\theta\, ire^{i \theta}\left[\frac{a_{-1}}{re^{i\theta}}+a_{0}+a_{1}re^{i\theta}+\cdots\right]
\end{eqnarray}
\begin{eqnarray}
\label{I_over}
\lim_{r\rightarrow0}\int_{c_{1}}g(\tau)d\tau=ia_{-1}\int_{\pi}^{0}d\theta=-i\pi a_{-1}
\end{eqnarray}
We use the residue theorem to calculate the principal value. The residue theorem requires a closed contour, but the Hilbert transform is defined on the real axis from $-\infty$ to $\infty$. Therefore we must choose a closed contour whose added parts contribute $0$ to the integral. We choose such a contour in the upper half plane, as shown in Fig.~\ref{fig:contour_plot1}. The radius of contour $c_2$ is $R$, and we evaluate the integral in the limit $R\rightarrow\infty$. Since the relevant $g(\tau)$ decays faster than $1/\tau$ on this arc as $R\rightarrow\infty$, we have
\begin{eqnarray}
\lim_{R\rightarrow\infty}\int_{c_{2}}g(\tau)d\tau=0
\end{eqnarray}
We have also checked this to be true via an explicit evaluation of the integral for our case. This allows us to write the principal value integral as a contour integral over a closed contour:
\begin{eqnarray}
\lim_{\epsilon\rightarrow 0}\int_{-\infty}^{x-\epsilon}g(\tau)d\tau+\lim_{\epsilon\rightarrow 0}\int_{x+\epsilon}^{\infty}&&g(\tau)d\tau+\lim_{r\rightarrow0}\int_{c_{1}}g(\tau)d\tau\nonumber\\ &&+\lim_{R\rightarrow\infty}\int_{c_{2}}g(\tau)d\tau=\oint g(\tau)d\tau
\end{eqnarray}
\begin{eqnarray}
\label{PV_int}
\lim_{\epsilon\rightarrow 0}\int_{-\infty}^{x-\epsilon}g(\tau)d\tau+\lim_{\epsilon\rightarrow 0}&&\int_{x+\epsilon}^{\infty}g(\tau)d\tau\nonumber\\ &&=\oint g(\tau)d\tau-\lim_{r\rightarrow0}\int_{c_{1}}g(\tau)d\tau
\end{eqnarray}
Now from the residue theorem,
\begin{eqnarray}
\label{residue_1}
\oint g(\tau)d\tau=2\pi i\,\sum_{i}\mathrm{Res}\left[g(\tau),z_{i}\right]
\end{eqnarray}
where the $z_i$ are the poles of $g(\tau)$ inside the closed contour, $\mathrm{Im}\left(z_{i}\right)\neq0$. Therefore, using Eq.~\ref{I_over}, Eq.~\ref{PV_int} and Eq.~\ref{residue_1}, we get
\begin{eqnarray}
\lim_{\epsilon\rightarrow 0}\int_{-\infty}^{x-\epsilon}g(\tau)d\tau+\lim_{\epsilon\rightarrow 0}&&\int_{x+\epsilon}^{\infty}g(\tau)d\tau=\nonumber\\ &&2\pi i\,\sum_{i}\mathrm{Res}\left[g(\tau),z_{i}\right]+i\pi a_{-1}
\end{eqnarray}
\begin{eqnarray}
\label{contour_final}
P\left\{ \int_{-\infty}^{\infty}g(\tau)d\tau\right\}=2\pi i\,\sum_{i}\mathrm{Res}\left[g(\tau),z_{i}\right]+i\pi a_{-1}
\end{eqnarray}
In the present case we have
\begin{eqnarray}
\label{hilbert1}
\rho(x)^{\mathrm{H}}=\frac{1}{\pi L}P\left\{ \int_{-\infty}^{\infty}\rho(\tau)\coth\left(\frac{\tau-x}{L}\right)d\tau\right\}
\end{eqnarray}
and
\begin{eqnarray}
\rho(x)=\rho_o+\frac{1}{2i\pi L}\left[\coth\left(\frac{x-i\lambda}{L}\right)-\coth\left(\frac{x+i\lambda}{L}\right)\right]
\end{eqnarray}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{hilbert3.png}
\caption{The contour used for solving the Hilbert transform. The contour is in the $\tau$ plane, traversed in the anticlockwise direction. Blue points denote the poles $\tau=i(\lambda+nL\pi)$ and red points denote the poles $\tau=x+inL\pi$, $n\in I^{+}$. Note that the pole $-i\lambda$ lies outside the chosen contour.}
\label{fig:contour_plot1}
\end{figure}
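Before applying Eq.~\ref{contour_final} to our kernel, it may help to see it work on a toy integrand (this example and the code are our addition). For $g(\tau)=\frac{1}{(\tau-x)(\tau^{2}+1)}$ the only pole inside the contour is $\tau=i$ and $a_{-1}=\frac{1}{x^{2}+1}$, so Eq.~\ref{contour_final} predicts $P\left\{\int g\right\}=-\frac{\pi x}{x^{2}+1}$. SciPy's \texttt{quad} with \texttt{weight='cauchy'} computes the principal value of $f(t)/(t-x)$ directly:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# PV integral of g(t) = 1/((t - x)(t^2 + 1)); the half-residue formula
# gives 2*pi*i*Res[g, i] + i*pi*a_{-1} = -pi*x/(1 + x^2).
x = 0.7
pv, _ = quad(lambda t: 1.0 / (t**2 + 1), -50.0, 50.0,
             weight='cauchy', wvar=x)   # PV of f(t)/(t - x)
print(pv, -np.pi * x / (1 + x**2))      # agree up to the finite cutoff
\end{verbatim}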
Now the Hilbert transform of $\rho_o$ is
\begin{eqnarray}
\rho_{o}^{\mathrm{H}}=\frac{1}{\pi L}P\left\{ \int_{-\infty}^{\infty}\rho_{o}\coth\left(\frac{\tau-x}{L}\right)d\tau\right\}
\end{eqnarray}
Changing the variable to $z=\left(\frac{\tau-x}{L}\right)$ we get
\begin{eqnarray}
\rho_{o}^{\mathrm{H}}=\frac{1}{\pi}P\left\{ \int_{-\infty}^{\infty}\rho_{o}\coth(z)dz\right\}=0
\end{eqnarray}
as $\coth (z)$ is an odd function of $z$. Now, the function $\coth\left(\frac{x-a}{L}\right)$ has a simple pole at $x=a$, i.e.\ on the real axis, as seen in its Laurent expansion
\begin{eqnarray}
\coth\left(\frac{x-x_{k}}{L}\right)=\frac{1}{\left(\frac{x-x_{k}}{L}\right)}+\frac{\left(\frac{x-x_{k}}{L}\right)}{3}-\frac{\left(\frac{x-x_{k}}{L}\right)^{3}}{45}+\mathcal{O}\left[\left(\frac{x-x_{k}}{L}\right)^{5}\right]
\end{eqnarray}
The function $\coth\left(\frac{x-x_{k}}{L}\right)$ also has simple poles inside the contour at $x=x_{k}+inL\pi$, where $n\in I^{+}$. From Eq.~\ref{hilbert1} we get
\begin{eqnarray}
\rho(x)^{\mathrm{H}}=\frac{1}{2i\pi^{2}L^{2}}&&\Biggl[P\left\{ \int_{-\infty}^{\infty}\coth\left(\frac{\tau-i\lambda}{L}\right)\coth\left(\frac{\tau-x}{L}\right)d\tau\right\}\nonumber\\ && -P\left\{ \int_{-\infty}^{\infty}\coth\left(\frac{\tau+i\lambda}{L}\right)\coth\left(\frac{\tau-x}{L}\right)d\tau\right\}\Biggr]
\end{eqnarray}
We now have two separate Hilbert transform terms. For the first the poles are at $\tau=x$ and $\tau=i\lambda$; for the second the poles are at $\tau=x$ and $\tau=-i\lambda$. The point $-i\lambda$ lies outside the contour, so its contribution to the residue sum is $0$. For the two integrals we have
\begin{eqnarray}
a_{-1}=L\coth\left(\frac{x-i\lambda}{L}\right) \textrm{\hspace{0.5in}and\hspace{0.5in}} a_{-1}=L\coth\left(\frac{x+i\lambda}{L}\right)
\end{eqnarray}
respectively. Thus, using Eq.~\ref{contour_final}, we get
\begin{eqnarray}
\rho(x)^{\mathrm{H}}=\frac{1}{2i\pi^{2}L^{2}}&&\Biggl[\left\{ i\pi L\coth\left(\frac{x-i\lambda}{L}\right)-2i\pi L\coth\left(\frac{x-i\lambda}{L}\right)\right\}\nonumber\\ && -\left\{ i\pi L\coth\left(\frac{x+i\lambda}{L}\right)\right\}\Biggr]
\end{eqnarray}
\begin{eqnarray}
\rho(x)^{\mathrm{H}}=-\frac{1}{2\pi L}\left[\coth\left(\frac{x-i\lambda}{L}\right)+\coth\left(\frac{x+i\lambda}{L}\right)\right]
\label{rhresult}
\end{eqnarray}
For the towers of poles at $\tau=i(\lambda+nL\pi)$ and at $\tau=x+inL\pi$ (present because these are hyperbolic functions), we have the following situation for each $n$. For the term $\coth\left(\frac{\tau-i\lambda}{L}\right)\coth\left(\frac{\tau-x}{L}\right)$ the contribution from the residues is proportional to
\begin{eqnarray}
\coth\left(\frac{x-i\lambda+iLn\pi}{L}\right)+\coth\left(\frac{i\lambda-x+inL\pi}{L}\right)=0
\end{eqnarray}
Similarly, for the term $\coth\left(\frac{\tau+i\lambda}{L}\right)\coth\left(\frac{\tau-x}{L}\right)$, the residue contribution is equal to $0$. In our case, one could also perform the Hilbert transform as a real-line integral using the basic definition of the principal value, arriving at the same Eq.~\ref{rhresult}.
\section{Appendix D: Numerical techniques}
For our HC model we have a second-order differential equation, Eq.~\ref{xddot}, of the form
\begin{eqnarray}
\ddot{x_i}=f(x_i,x_j)
\end{eqnarray}
where $i$ denotes the $i^{th}$ particle. There is no explicit time dependence in the differential equation.
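For concreteness, here is a minimal NumPy sketch of the right-hand side for all $N$ particles at once (our illustration with hypothetical function and variable names; it transcribes the equation of motion derived in Appendix A):
\begin{verbatim}
import numpy as np

def force(x, A, g, L, M):
    """f(x_i, x_j): acceleration of every particle, given all positions x."""
    N = len(x)
    # Pairwise separations x_i - x_j; the diagonal is shifted to a dummy
    # nonzero value so sinh() is safe, then masked out of the sum.
    d = x[:, None] - x[None, :] + np.eye(N)
    term = np.cosh(d / L) / np.sinh(d / L) ** 3
    term[np.eye(N, dtype=bool)] = 0.0        # exclude j = i
    pair = term.sum(axis=1)
    return (-2 * A**2 / L**3 * np.sinh(2 * x / L) * np.cosh(2 * x / L)
            + 2 * A * g / L**3 * (N - M - 1) * np.sinh(2 * x / L)
            + 2 * g**2 / L**3 * pair)
\end{verbatim}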
In order to perform RK4 \cite{jensen} we break the above equation into two sets of first-order differential equations, equivalent to Hamilton's equations of motion. Thus we have
\begin{eqnarray}
\dot{x_i}=p_i\;\textrm{and}\;\dot{p_i}=f(x_i,x_j)
\end{eqnarray}
where
\begin{eqnarray}
f(x_i,x_j)&&=-\frac{2A^{2}}{L^{3}}\sinh\left(\frac{2x_{i}}{L}\right)\cosh\left(\frac{2x_{i}}{L}\right)+\nonumber\\ &&\frac{2Ag}{L^{3}}(N-M-1)\sinh\left(\frac{2x_{i}}{L}\right)+\frac{2g^{2}}{L^{3}}\sum_{j\neq i}^{N}\left(\frac{\cosh\left(\frac{x_{i}-x_{j}}{L}\right)}{\sinh^{3}\left(\frac{x_{i}-x_{j}}{L}\right)}\right)
\end{eqnarray}
These equations are coupled to each other. The RK4 stages for each particle are
\begin{eqnarray}
k_1(i)=dt\cdot(p_i)\hspace{1.25in} q_1(i)=dt\cdot f(x_i,x_j)\nonumber\\ k_2(i)=dt\cdot(p_i+0.5\cdot q_1(i))\hspace{0.4in} q_2(i)=dt\cdot\left(f(x_i,x_j)+0.5\cdot k_1(i)\cdot\frac{\mathrm{d}f}{\mathrm{d}x_i}\right) \nonumber\\ k_3(i)=dt\cdot(p_i+0.5\cdot q_2(i))\hspace{0.4in} q_3(i)=dt\cdot\left(f(x_i,x_j)+0.5\cdot k_2(i)\cdot\frac{\mathrm{d}f}{\mathrm{d}x_i}\right) \nonumber\\ k_4(i)=dt\cdot(p_i+q_3(i))\hspace{0.72in} q_4(i)=dt\cdot\left(f(x_i,x_j)+k_3(i)\cdot\frac{\mathrm{d}f}{\mathrm{d}x_i}\right) \nonumber
\end{eqnarray}
where the stages $q_2$, $q_3$ and $q_4$ approximate $f$ at the shifted positions by a first-order Taylor expansion. After one iteration the new position and momentum values become
\begin{eqnarray}
x_i(t+dt)=x_i(t)+\frac{1}{6}\Big[ k_1(i)+2k_2(i)+2k_3(i)+k_4(i) \Big] \\ p_i(t+dt)=p_i(t)+\frac{1}{6}\Big[ q_1(i)+2q_2(i)+2q_3(i)+q_4(i) \Big]
\end{eqnarray}
We repeat this process at each time step to get the new position and momentum coordinates.
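A compact sketch of one such time step follows (our code, not the authors'; it reuses the hypothetical \texttt{force()} helper above and evaluates $f$ at the shifted positions directly, i.e.\ textbook RK4, rather than through the first-order Taylor expansion written out above):
\begin{verbatim}
def rk4_step(x, p, dt, A, g, L, M):
    """Advance positions x and momenta p by one RK4 step of size dt."""
    k1 = dt * p;              q1 = dt * force(x, A, g, L, M)
    k2 = dt * (p + 0.5 * q1); q2 = dt * force(x + 0.5 * k1, A, g, L, M)
    k3 = dt * (p + 0.5 * q2); q3 = dt * force(x + 0.5 * k2, A, g, L, M)
    k4 = dt * (p + q3);       q4 = dt * force(x + k3, A, g, L, M)
    x_new = x + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    p_new = p + (q1 + 2 * q2 + 2 * q3 + q4) / 6.0
    return x_new, p_new
\end{verbatim}
Monitoring the conserved total energy over long runs is a useful diagnostic for choosing the step size $dt$ here.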
{ "timestamp": "2019-04-16T02:18:34", "yymm": "1904", "arxiv_id": "1904.06709", "language": "en", "url": "https://arxiv.org/abs/1904.06709" }